Test Report: Docker_Linux_crio_arm64 22089

334c0a8a01ce6327cc86bd51efb70eb94afee1a0:2025-12-10:42712

Failed tests (40/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.3
44 TestAddons/parallel/Registry 15.1
45 TestAddons/parallel/RegistryCreds 0.47
46 TestAddons/parallel/Ingress 143.86
47 TestAddons/parallel/InspektorGadget 6.3
48 TestAddons/parallel/MetricsServer 6.36
50 TestAddons/parallel/CSI 52.08
51 TestAddons/parallel/Headlamp 3.38
52 TestAddons/parallel/CloudSpanner 5.33
53 TestAddons/parallel/LocalPath 8.47
54 TestAddons/parallel/NvidiaDevicePlugin 6.36
55 TestAddons/parallel/Yakd 6.29
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 501.79
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 368.27
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.5
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.36
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.36
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.34
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.14
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.72
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 2.97
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.27
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.65
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 3.07
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.08
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.33
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.32
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.76
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.39
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.38
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.11
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 112.27
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.21
293 TestJSONOutput/pause/Command 1.72
299 TestJSONOutput/unpause/Command 1.92
358 TestKubernetesUpgrade 780.92
374 TestPause/serial/Pause 6.87
458 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 7200.062
TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable volcano --alsologtostderr -v=1: exit status 11 (297.356606ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 07:27:27.025683  385403 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:27:27.026441  385403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:27:27.026456  385403 out.go:374] Setting ErrFile to fd 2...
	I1210 07:27:27.026463  385403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:27:27.026749  385403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:27:27.027068  385403 mustload.go:66] Loading cluster: addons-054300
	I1210 07:27:27.027463  385403 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:27:27.027485  385403 addons.go:622] checking whether the cluster is paused
	I1210 07:27:27.027603  385403 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:27:27.027618  385403 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:27:27.028132  385403 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:27:27.051695  385403 ssh_runner.go:195] Run: systemctl --version
	I1210 07:27:27.051761  385403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:27:27.068767  385403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:27:27.165428  385403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:27:27.165507  385403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:27:27.200283  385403 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:27:27.200307  385403 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:27:27.200312  385403 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:27:27.200316  385403 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:27:27.200320  385403 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:27:27.200323  385403 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:27:27.200326  385403 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:27:27.200330  385403 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:27:27.200332  385403 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:27:27.200359  385403 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:27:27.200369  385403 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:27:27.200373  385403 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:27:27.200377  385403 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:27:27.200380  385403 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:27:27.200383  385403 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:27:27.200394  385403 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:27:27.200402  385403 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:27:27.200407  385403 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:27:27.200410  385403 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:27:27.200414  385403 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:27:27.200418  385403 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:27:27.200440  385403 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:27:27.200447  385403 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:27:27.200450  385403 cri.go:89] found id: ""
	I1210 07:27:27.200501  385403 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:27:27.215650  385403 out.go:203] 
	W1210 07:27:27.218468  385403 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:27:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:27:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:27:27.218487  385403 out.go:285] * 
	* 
	W1210 07:27:27.229743  385403 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:27:27.232889  385403 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.30s)
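Note: the disable command never reaches the addon; it fails in minikube's paused-state check, which shells out to "sudo runc list -f json", and /run/runc does not exist on this crio node. A minimal sketch to reproduce the check by hand (assumes the addons-054300 profile from the log is still up; both remote commands are taken verbatim from the log above, wrapped in "minikube ssh"):

	# The exact command the pause check runs; on this node it exits 1 with
	# "open /run/runc: no such file or directory":
	out/minikube-linux-arm64 -p addons-054300 ssh -- sudo runc list -f json
	# crio-managed containers are visible via crictl, which the same check
	# queried successfully a moment earlier:
	out/minikube-linux-arm64 -p addons-054300 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system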

TestAddons/parallel/Registry (15.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 14.33911ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003478731s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012020787s
addons_test.go:394: (dbg) Run:  kubectl --context addons-054300 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-054300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-054300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.56409435s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 ip
2025/12/10 07:27:51 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable registry --alsologtostderr -v=1: exit status 11 (257.419473ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 07:27:51.522791  385864 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:27:51.523550  385864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:27:51.523568  385864 out.go:374] Setting ErrFile to fd 2...
	I1210 07:27:51.523574  385864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:27:51.523860  385864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:27:51.524203  385864 mustload.go:66] Loading cluster: addons-054300
	I1210 07:27:51.524604  385864 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:27:51.524624  385864 addons.go:622] checking whether the cluster is paused
	I1210 07:27:51.524737  385864 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:27:51.524754  385864 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:27:51.525388  385864 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:27:51.542604  385864 ssh_runner.go:195] Run: systemctl --version
	I1210 07:27:51.542673  385864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:27:51.561937  385864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:27:51.661590  385864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:27:51.661696  385864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:27:51.692234  385864 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:27:51.692258  385864 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:27:51.692264  385864 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:27:51.692275  385864 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:27:51.692279  385864 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:27:51.692283  385864 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:27:51.692286  385864 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:27:51.692289  385864 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:27:51.692292  385864 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:27:51.692299  385864 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:27:51.692308  385864 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:27:51.692311  385864 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:27:51.692315  385864 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:27:51.692318  385864 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:27:51.692321  385864 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:27:51.692326  385864 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:27:51.692341  385864 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:27:51.692346  385864 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:27:51.692349  385864 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:27:51.692352  385864 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:27:51.692357  385864 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:27:51.692360  385864 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:27:51.692363  385864 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:27:51.692366  385864 cri.go:89] found id: ""
	I1210 07:27:51.692448  385864 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:27:51.707969  385864 out.go:203] 
	W1210 07:27:51.711001  385864 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:27:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:27:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:27:51.711029  385864 out.go:285] * 
	* 
	W1210 07:27:51.716617  385864 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:27:51.719671  385864 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.10s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.571157ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-054300
addons_test.go:334: (dbg) Run:  kubectl --context addons-054300 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (243.757602ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 07:28:34.500347  387792 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:34.501106  387792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:34.501142  387792 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:34.501167  387792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:34.501596  387792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:34.501983  387792 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:34.502793  387792 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:34.502840  387792 addons.go:622] checking whether the cluster is paused
	I1210 07:28:34.503004  387792 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:34.503078  387792 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:34.503968  387792 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:34.521484  387792 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:34.521538  387792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:34.539927  387792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:34.637643  387792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:34.637729  387792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:34.666610  387792 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:34.666629  387792 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:34.666634  387792 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:34.666637  387792 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:34.666641  387792 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:34.666644  387792 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:34.666648  387792 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:34.666651  387792 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:34.666656  387792 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:34.666662  387792 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:34.666665  387792 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:34.666668  387792 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:34.666671  387792 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:34.666674  387792 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:34.666678  387792 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:34.666683  387792 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:34.666690  387792 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:34.666695  387792 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:34.666698  387792 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:34.666701  387792 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:34.666705  387792 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:34.666708  387792 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:34.666711  387792 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:34.666714  387792 cri.go:89] found id: ""
	I1210 07:28:34.666762  387792 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:34.681571  387792 out.go:203] 
	W1210 07:28:34.684502  387792 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:34.684532  387792 out.go:285] * 
	* 
	W1210 07:28:34.690316  387792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:34.693187  387792 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (143.86s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-054300 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-054300 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-054300 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [f410a52f-4351-4f96-8925-ab272797d635] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [f410a52f-4351-4f96-8925-ab272797d635] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003781264s
I1210 07:28:37.326866  378528 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.203068013s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-054300 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
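Note: the curl probe above failed with exit status 28, curl's "operation timed out" code surfaced through ssh, so the request to the in-node ingress never completed (it did not return a wrong page). A hedged diagnostic sketch, not part of the test itself; the profile name and Host header come from the log:

	# Re-run the probe verbosely with an explicit timeout:
	out/minikube-linux-arm64 -p addons-054300 ssh -- curl -sv --max-time 10 http://127.0.0.1/ -H "Host: nginx.example.com"
	# Confirm the ingress-nginx controller pod is Running and where:
	kubectl --context addons-054300 -n ingress-nginx get pods -o wide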
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-054300
helpers_test.go:244: (dbg) docker inspect addons-054300:

-- stdout --
	[
	    {
	        "Id": "dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a",
	        "Created": "2025-12-10T07:25:12.430115897Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 379935,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:25:12.487333909Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a/hosts",
	        "LogPath": "/var/lib/docker/containers/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a-json.log",
	        "Name": "/addons-054300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-054300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-054300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a",
	                "LowerDir": "/var/lib/docker/overlay2/518f1a22107b7ce6dc31dc1d0178c0e8732e9c2b2ac4312bf7116070f3c344af-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/518f1a22107b7ce6dc31dc1d0178c0e8732e9c2b2ac4312bf7116070f3c344af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/518f1a22107b7ce6dc31dc1d0178c0e8732e9c2b2ac4312bf7116070f3c344af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/518f1a22107b7ce6dc31dc1d0178c0e8732e9c2b2ac4312bf7116070f3c344af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-054300",
	                "Source": "/var/lib/docker/volumes/addons-054300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-054300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-054300",
	                "name.minikube.sigs.k8s.io": "addons-054300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e20225553d498f5b200facbcb4b592c80a6f5c28a0de5dc3fadf37ea92e8446",
	            "SandboxKey": "/var/run/docker/netns/8e20225553d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-054300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:76:54:fa:4e:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9407389f9dc9cc594903ffd05e318da54dbe021a447cc9aa16ccc948c918da56",
	                    "EndpointID": "a68d08196d76b97ddf9123fb278d9e51a1da6e0e19141e1e6a97994a8f3201d6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-054300",
	                        "dc22f5170a29"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-054300 -n addons-054300
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-054300 logs -n 25: (1.52982843s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-393659                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-393659 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ start   │ --download-only -p binary-mirror-728513 --alsologtostderr --binary-mirror http://127.0.0.1:37169 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-728513   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ delete  │ -p binary-mirror-728513                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-728513   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ addons  │ enable dashboard -p addons-054300                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ addons  │ disable dashboard -p addons-054300                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ start   │ -p addons-054300 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:27 UTC │
	│ addons  │ addons-054300 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │                     │
	│ addons  │ addons-054300 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │                     │
	│ ip      │ addons-054300 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │ 10 Dec 25 07:27 UTC │
	│ addons  │ addons-054300 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │                     │
	│ addons  │ addons-054300 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │                     │
	│ addons  │ addons-054300 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ ssh     │ addons-054300 ssh cat /opt/local-path-provisioner/pvc-f752037c-3c31-451d-be8f-825295773e36_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │ 10 Dec 25 07:28 UTC │
	│ addons  │ addons-054300 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ addons-054300 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ enable headlamp -p addons-054300 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ addons-054300 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ addons-054300 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ addons-054300 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ addons-054300 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ addons-054300 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-054300                                                                                                                                                                                                                                                                                                                                                                                           │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │ 10 Dec 25 07:28 UTC │
	│ addons  │ addons-054300 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ ssh     │ addons-054300 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ ip      │ addons-054300 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:30 UTC │ 10 Dec 25 07:30 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:24:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:24:47.218606  379535 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:24:47.218723  379535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:47.218739  379535 out.go:374] Setting ErrFile to fd 2...
	I1210 07:24:47.218746  379535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:47.219143  379535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:24:47.219687  379535 out.go:368] Setting JSON to false
	I1210 07:24:47.220529  379535 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7638,"bootTime":1765343850,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:24:47.220626  379535 start.go:143] virtualization:  
	I1210 07:24:47.224498  379535 out.go:179] * [addons-054300] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:24:47.228394  379535 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:24:47.228480  379535 notify.go:221] Checking for updates...
	I1210 07:24:47.234361  379535 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:24:47.237347  379535 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:24:47.240337  379535 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:24:47.243259  379535 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:24:47.246125  379535 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:24:47.249174  379535 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:24:47.279977  379535 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:24:47.280093  379535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:47.340471  379535 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 07:24:47.331358298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:47.340579  379535 docker.go:319] overlay module found
	I1210 07:24:47.343820  379535 out.go:179] * Using the docker driver based on user configuration
	I1210 07:24:47.346682  379535 start.go:309] selected driver: docker
	I1210 07:24:47.346700  379535 start.go:927] validating driver "docker" against <nil>
	I1210 07:24:47.346713  379535 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:24:47.347458  379535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:47.404159  379535 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 07:24:47.394471336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:47.404313  379535 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:24:47.404541  379535 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:24:47.407436  379535 out.go:179] * Using Docker driver with root privileges
	I1210 07:24:47.410278  379535 cni.go:84] Creating CNI manager for ""
	I1210 07:24:47.410353  379535 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:24:47.410363  379535 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:24:47.410446  379535 start.go:353] cluster config:
	{Name:addons-054300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:24:47.413643  379535 out.go:179] * Starting "addons-054300" primary control-plane node in "addons-054300" cluster
	I1210 07:24:47.416474  379535 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:24:47.419420  379535 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:24:47.422238  379535 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 07:24:47.422293  379535 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1210 07:24:47.422315  379535 cache.go:65] Caching tarball of preloaded images
	I1210 07:24:47.422414  379535 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:24:47.422424  379535 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 07:24:47.422765  379535 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/config.json ...
	I1210 07:24:47.422786  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/config.json: {Name:mk379c617c8daf139aef95276096a9d1c3831632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
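[Note: the cluster config above is persisted as JSON at .minikube/profiles/addons-054300/config.json. As a rough sketch (not minikube's actual types, which live in its config package and carry many more fields; struct names here are guessed from the field names visible in the dump), a small Go program could read a few of those fields back:]

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Subset of the profile config seen in the log above; field names
    // match keys visible in the dump, but the real struct is larger.
    type KubernetesConfig struct {
    	KubernetesVersion string
    	ClusterName       string
    	ContainerRuntime  string
    }

    type ClusterConfig struct {
    	Name             string
    	Driver           string
    	Memory           int
    	CPUs             int
    	KubernetesConfig KubernetesConfig
    }

    func main() {
    	data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/addons-054300/config.json"))
    	if err != nil {
    		panic(err)
    	}
    	var cc ClusterConfig
    	if err := json.Unmarshal(data, &cc); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n",
    		cc.Name, cc.Driver, cc.KubernetesConfig.ContainerRuntime, cc.KubernetesConfig.KubernetesVersion)
    }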
	I1210 07:24:47.422946  379535 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:24:47.438786  379535 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 07:24:47.438932  379535 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory
	I1210 07:24:47.438951  379535 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory, skipping pull
	I1210 07:24:47.438956  379535 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in cache, skipping pull
	I1210 07:24:47.438963  379535 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca as a tarball
	I1210 07:24:47.438967  379535 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca from local cache
	I1210 07:25:05.474702  379535 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca from cached tarball
	I1210 07:25:05.474742  379535 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:25:05.474781  379535 start.go:360] acquireMachinesLock for addons-054300: {Name:mk5475be5e895678590cbabe8e033afffb7fa95a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:05.474898  379535 start.go:364] duration metric: took 93.097µs to acquireMachinesLock for "addons-054300"
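[Note: the acquireMachinesLock entries come from a file-based lock with the {Delay:500ms Timeout:10m0s} parameters logged above. A minimal sketch of that retry pattern, not minikube's actual lock implementation (tryLock and the lock path are hypothetical):]

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // tryLock creates lockPath exclusively, retrying every delay until
    // timeout elapses. Illustrative only.
    func tryLock(lockPath string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(lockPath) }, nil
    		}
    		if !errors.Is(err, os.ErrExist) {
    			return nil, err
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s", lockPath)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := tryLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held; provisioning would run here")
    }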
	I1210 07:25:05.474929  379535 start.go:93] Provisioning new machine with config: &{Name:addons-054300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:25:05.475034  379535 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:25:05.476547  379535 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 07:25:05.476783  379535 start.go:159] libmachine.API.Create for "addons-054300" (driver="docker")
	I1210 07:25:05.476820  379535 client.go:173] LocalClient.Create starting
	I1210 07:25:05.476934  379535 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem
	I1210 07:25:05.816166  379535 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem
	I1210 07:25:06.072129  379535 cli_runner.go:164] Run: docker network inspect addons-054300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:25:06.087708  379535 cli_runner.go:211] docker network inspect addons-054300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:25:06.087815  379535 network_create.go:284] running [docker network inspect addons-054300] to gather additional debugging logs...
	I1210 07:25:06.087838  379535 cli_runner.go:164] Run: docker network inspect addons-054300
	W1210 07:25:06.104201  379535 cli_runner.go:211] docker network inspect addons-054300 returned with exit code 1
	I1210 07:25:06.104241  379535 network_create.go:287] error running [docker network inspect addons-054300]: docker network inspect addons-054300: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-054300 not found
	I1210 07:25:06.104256  379535 network_create.go:289] output of [docker network inspect addons-054300]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-054300 not found
	
	** /stderr **
	I1210 07:25:06.104394  379535 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:25:06.121312  379535 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001941c80}
	I1210 07:25:06.121359  379535 network_create.go:124] attempt to create docker network addons-054300 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 07:25:06.121416  379535 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-054300 addons-054300
	I1210 07:25:06.178427  379535 network_create.go:108] docker network addons-054300 192.168.49.0/24 created
	I1210 07:25:06.178462  379535 kic.go:121] calculated static IP "192.168.49.2" for the "addons-054300" container
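[Note: network.go probes for a free private subnet (the failed `docker network inspect` above is the probe), then creates the bridge network with a fixed gateway and MTU. A rough reproduction of those two docker CLI calls from Go; the flags and labels are taken verbatim from the logged command, error handling is simplified:]

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	subnet, gateway := "192.168.49.0/24", "192.168.49.1"
    	// Probe first: `docker network inspect` exits non-zero if the network is absent.
    	if err := exec.Command("docker", "network", "inspect", "addons-054300").Run(); err == nil {
    		fmt.Println("network already exists")
    		return
    	}
    	out, err := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet="+subnet,
    		"--gateway="+gateway,
    		"-o", "--ip-masq", "-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io=addons-054300",
    		"addons-054300").CombinedOutput()
    	if err != nil {
    		panic(fmt.Sprintf("network create failed: %v\n%s", err, out))
    	}
    	fmt.Printf("created network: %s", out)
    }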
	I1210 07:25:06.178536  379535 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:25:06.193801  379535 cli_runner.go:164] Run: docker volume create addons-054300 --label name.minikube.sigs.k8s.io=addons-054300 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:25:06.209746  379535 oci.go:103] Successfully created a docker volume addons-054300
	I1210 07:25:06.209846  379535 cli_runner.go:164] Run: docker run --rm --name addons-054300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054300 --entrypoint /usr/bin/test -v addons-054300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:25:08.361395  379535 cli_runner.go:217] Completed: docker run --rm --name addons-054300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054300 --entrypoint /usr/bin/test -v addons-054300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib: (2.151494222s)
	I1210 07:25:08.361430  379535 oci.go:107] Successfully prepared a docker volume addons-054300
	I1210 07:25:08.361476  379535 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 07:25:08.361489  379535 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:25:08.361561  379535 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-054300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:25:12.356494  379535 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-054300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.994893742s)
	I1210 07:25:12.356527  379535 kic.go:203] duration metric: took 3.995033912s to extract preloaded images to volume ...
	W1210 07:25:12.356667  379535 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:25:12.356785  379535 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:25:12.415334  379535 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-054300 --name addons-054300 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054300 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-054300 --network addons-054300 --ip 192.168.49.2 --volume addons-054300:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:25:12.685650  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Running}}
	I1210 07:25:12.707897  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:12.730739  379535 cli_runner.go:164] Run: docker exec addons-054300 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:25:12.782213  379535 oci.go:144] the created container "addons-054300" has a running status.
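[Note: after `docker run`, the log checks the container's state via `docker container inspect --format={{.State.Running}}` before declaring it running. A small sketch of that readiness check as a polling loop; the 30s deadline and 500ms interval are assumptions, not minikube's values:]

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(30 * time.Second)
    	for time.Now().Before(deadline) {
    		// Same inspect call as in the log above.
    		out, err := exec.Command("docker", "container", "inspect",
    			"addons-054300", "--format={{.State.Running}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "true" {
    			fmt.Println("container is running")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	panic("container never reached running state")
    }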
	I1210 07:25:12.782240  379535 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa...
	I1210 07:25:13.353580  379535 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:25:13.373998  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:13.390770  379535 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:25:13.390796  379535 kic_runner.go:114] Args: [docker exec --privileged addons-054300 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:25:13.433178  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:13.450298  379535 machine.go:94] provisionDockerMachine start ...
	I1210 07:25:13.450417  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:13.467961  379535 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:13.468294  379535 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1210 07:25:13.468311  379535 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:25:13.468925  379535 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35952->127.0.0.1:33143: read: connection reset by peer
	I1210 07:25:16.602423  379535 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-054300
	
	I1210 07:25:16.602447  379535 ubuntu.go:182] provisioning hostname "addons-054300"
	I1210 07:25:16.602512  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:16.619323  379535 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:16.619635  379535 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1210 07:25:16.619650  379535 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-054300 && echo "addons-054300" | sudo tee /etc/hostname
	I1210 07:25:16.761901  379535 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-054300
	
	I1210 07:25:16.761998  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:16.780775  379535 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:16.781102  379535 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1210 07:25:16.781117  379535 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-054300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-054300/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-054300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:25:16.919095  379535 main.go:143] libmachine: SSH cmd err, output: <nil>: 
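[Note: the first SSH dial at 07:25:13 fails with "connection reset by peer" and succeeds on retry about three seconds later; sshd inside the fresh container is simply not up yet. A minimal retry-dial sketch using golang.org/x/crypto/ssh; port 33143 and the key path are the values from this run, the retry count and delay are assumptions:]

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/addons-054300/id_rsa"))
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
    		Timeout:         5 * time.Second,
    	}
    	var client *ssh.Client
    	for i := 0; i < 10; i++ { // retry while sshd comes up
    		client, err = ssh.Dial("tcp", "127.0.0.1:33143", cfg)
    		if err == nil {
    			break
    		}
    		time.Sleep(time.Second)
    	}
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	fmt.Println("ssh connection established")
    }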
	I1210 07:25:16.919125  379535 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:25:16.919156  379535 ubuntu.go:190] setting up certificates
	I1210 07:25:16.919173  379535 provision.go:84] configureAuth start
	I1210 07:25:16.919236  379535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054300
	I1210 07:25:16.935792  379535 provision.go:143] copyHostCerts
	I1210 07:25:16.935880  379535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:25:16.936000  379535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:25:16.936060  379535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:25:16.936104  379535 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.addons-054300 san=[127.0.0.1 192.168.49.2 addons-054300 localhost minikube]
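[Note: configureAuth generates a server certificate whose SANs cover every name the machine may be reached by, per the san=[...] list in the provision.go:117 line above. A self-contained sketch of issuing such a SAN certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with its CA key:]

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.addons-054300"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SANs listed in the log line above:
    		DNSNames:    []string{"addons-054300", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }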
	I1210 07:25:17.210221  379535 provision.go:177] copyRemoteCerts
	I1210 07:25:17.210290  379535 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:25:17.210339  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.227542  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:17.322802  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:25:17.340341  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 07:25:17.357777  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:25:17.377216  379535 provision.go:87] duration metric: took 458.025542ms to configureAuth
	I1210 07:25:17.377248  379535 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:25:17.377447  379535 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:25:17.377559  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.394952  379535 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:17.395314  379535 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1210 07:25:17.395337  379535 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:25:17.682930  379535 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:25:17.682949  379535 machine.go:97] duration metric: took 4.23262496s to provisionDockerMachine
	I1210 07:25:17.682959  379535 client.go:176] duration metric: took 12.20613005s to LocalClient.Create
	I1210 07:25:17.682970  379535 start.go:167] duration metric: took 12.206188783s to libmachine.API.Create "addons-054300"
	I1210 07:25:17.682976  379535 start.go:293] postStartSetup for "addons-054300" (driver="docker")
	I1210 07:25:17.682986  379535 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:25:17.683079  379535 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:25:17.683132  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.700183  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:17.794603  379535 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:25:17.797708  379535 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:25:17.797737  379535 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:25:17.797749  379535 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:25:17.797814  379535 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:25:17.797841  379535 start.go:296] duration metric: took 114.859215ms for postStartSetup
	I1210 07:25:17.798153  379535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054300
	I1210 07:25:17.814914  379535 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/config.json ...
	I1210 07:25:17.815226  379535 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:25:17.815279  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.831700  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:17.928184  379535 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:25:17.932943  379535 start.go:128] duration metric: took 12.457888416s to createHost
	I1210 07:25:17.932971  379535 start.go:83] releasing machines lock for "addons-054300", held for 12.458057295s
	I1210 07:25:17.933042  379535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054300
	I1210 07:25:17.955083  379535 ssh_runner.go:195] Run: cat /version.json
	I1210 07:25:17.955123  379535 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:25:17.955156  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.955203  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.973722  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:17.976966  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:18.166947  379535 ssh_runner.go:195] Run: systemctl --version
	I1210 07:25:18.173006  379535 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:25:18.210955  379535 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:25:18.215736  379535 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:25:18.215812  379535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:25:18.246211  379535 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
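[Note: the find/-exec mv step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the kindnet config stays active. The same rename pass expressed in Go; the glob patterns mirror the -name arguments in the logged command:]

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, _ := filepath.Glob(pattern)
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already sidelined
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }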
	I1210 07:25:18.246239  379535 start.go:496] detecting cgroup driver to use...
	I1210 07:25:18.246273  379535 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:25:18.246328  379535 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:25:18.263964  379535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:25:18.277028  379535 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:25:18.277114  379535 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:25:18.294661  379535 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:25:18.313256  379535 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:25:18.434176  379535 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:25:18.549532  379535 docker.go:234] disabling docker service ...
	I1210 07:25:18.549646  379535 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:25:18.570441  379535 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:25:18.583494  379535 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:25:18.697608  379535 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:25:18.818933  379535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:25:18.832562  379535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:25:18.848261  379535 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:25:18.848340  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.857134  379535 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:25:18.857257  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.866348  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.875386  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.884266  379535 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:25:18.892384  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.900937  379535 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.913750  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.922204  379535 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:25:18.929700  379535 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:25:18.936714  379535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:25:19.059141  379535 ssh_runner.go:195] Run: sudo systemctl restart crio
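[Note: each sed invocation above rewrites one key in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A sketch of the same whole-line rewrite for the pause_image key, done with Go's regexp instead of sed; values are the ones from the log:]

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	data = re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	if err := os.WriteFile(conf, data, 0o644); err != nil {
    		panic(err)
    	}
    }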
	I1210 07:25:19.230038  379535 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:25:19.230142  379535 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:25:19.234032  379535 start.go:564] Will wait 60s for crictl version
	I1210 07:25:19.234118  379535 ssh_runner.go:195] Run: which crictl
	I1210 07:25:19.237507  379535 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:25:19.264531  379535 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:25:19.264689  379535 ssh_runner.go:195] Run: crio --version
	I1210 07:25:19.295296  379535 ssh_runner.go:195] Run: crio --version
	I1210 07:25:19.329827  379535 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 07:25:19.332622  379535 cli_runner.go:164] Run: docker network inspect addons-054300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:25:19.349151  379535 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:25:19.352931  379535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:25:19.362552  379535 kubeadm.go:884] updating cluster {Name:addons-054300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:25:19.362681  379535 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 07:25:19.362745  379535 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:25:19.400212  379535 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:25:19.400240  379535 crio.go:433] Images already preloaded, skipping extraction
	I1210 07:25:19.400297  379535 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:25:19.424822  379535 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:25:19.424849  379535 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:25:19.424857  379535 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1210 07:25:19.424955  379535 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-054300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:25:19.425039  379535 ssh_runner.go:195] Run: crio config
	I1210 07:25:19.475377  379535 cni.go:84] Creating CNI manager for ""
	I1210 07:25:19.475400  379535 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:25:19.475418  379535 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:25:19.475442  379535 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-054300 NodeName:addons-054300 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:25:19.475575  379535 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-054300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
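[Note: the multi-document kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (2210 bytes, per the scp line a few lines below). For sanity-checking such a file outside minikube, a small Go sketch using gopkg.in/yaml.v3 can split the documents on the --- separators and report each one's kind:]

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f) // iterates over the --- separated documents
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			panic(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }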
	
	I1210 07:25:19.475656  379535 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 07:25:19.483645  379535 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:25:19.483720  379535 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:25:19.491413  379535 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 07:25:19.504415  379535 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:25:19.517408  379535 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1210 07:25:19.530382  379535 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:25:19.534012  379535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:25:19.544387  379535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:25:19.651925  379535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:25:19.668332  379535 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300 for IP: 192.168.49.2
	I1210 07:25:19.668356  379535 certs.go:195] generating shared ca certs ...
	I1210 07:25:19.668386  379535 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:19.668540  379535 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:25:20.330721  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt ...
	I1210 07:25:20.330758  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt: {Name:mk3644e11bdcb0925a9a05bad1e0e3fca414ff61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:20.330994  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key ...
	I1210 07:25:20.331036  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key: {Name:mkbd491725c3973182b429cc0698bef0142dee42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:20.331139  379535 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:25:20.652005  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt ...
	I1210 07:25:20.652039  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt: {Name:mk1d9cab3816c24cf58418acb5b2427e8af1ed22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:20.652235  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key ...
	I1210 07:25:20.652249  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key: {Name:mk5cf3672a5f26dcedf3b7e878f4e247d5d21fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:20.652331  379535 certs.go:257] generating profile certs ...
	I1210 07:25:20.652397  379535 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.key
	I1210 07:25:20.652415  379535 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt with IP's: []
	I1210 07:25:21.188695  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt ...
	I1210 07:25:21.188730  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: {Name:mk390f4e644bc83243db754d72329bce977b5ca9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:21.188931  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.key ...
	I1210 07:25:21.188946  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.key: {Name:mk54c3651d6e559b24dc9640918369d8c10570cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:21.189036  379535 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key.5ca2ca84
	I1210 07:25:21.189058  379535 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt.5ca2ca84 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 07:25:21.496556  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt.5ca2ca84 ...
	I1210 07:25:21.496592  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt.5ca2ca84: {Name:mk66955cc7b6be36bc2ca2ad143c24a06520bbaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:21.496774  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key.5ca2ca84 ...
	I1210 07:25:21.496793  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key.5ca2ca84: {Name:mk36e26c8bb9140805c993e13a8c5793bb88983a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:21.496880  379535 certs.go:382] copying /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt.5ca2ca84 -> /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt
	I1210 07:25:21.496961  379535 certs.go:386] copying /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key.5ca2ca84 -> /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key
	I1210 07:25:21.497020  379535 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.key
	I1210 07:25:21.497046  379535 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.crt with IP's: []
	I1210 07:25:22.020101  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.crt ...
	I1210 07:25:22.020135  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.crt: {Name:mk170e6f876c7bd4d99312da16ff5bcd9a092f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:22.020328  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.key ...
	I1210 07:25:22.020345  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.key: {Name:mkc75639790e7f9e05cc24c3d1c0a1a459121603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:22.020536  379535 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:25:22.020585  379535 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:25:22.020613  379535 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:25:22.020656  379535 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
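	Annotation: the certs phase above builds a self-signed "minikubeCA" plus profile certs whose IP SANs cover the service ClusterIP (10.96.0.1), localhost, and the node IP (192.168.49.2). A compilable crypto/x509 sketch of that shape (key sizes, subjects, and lifetimes here are assumptions, not minikube's actual certs.go values):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA, analogous to the "minikubeCA" generated above.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Apiserver serving cert with the IP SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
            },
        }
        _, _ = x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Println("signed apiserver cert for", srvTmpl.IPAddresses)
    }
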
	I1210 07:25:22.021249  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:25:22.040846  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:25:22.059006  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:25:22.077418  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:25:22.095361  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 07:25:22.112988  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:25:22.130149  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:25:22.147224  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:25:22.164232  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:25:22.181666  379535 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:25:22.194110  379535 ssh_runner.go:195] Run: openssl version
	I1210 07:25:22.200299  379535 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:25:22.208135  379535 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:25:22.215542  379535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:25:22.219383  379535 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:25:22.219498  379535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:25:22.261025  379535 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:25:22.268549  379535 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
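	Annotation: the two steps above wire the CA into the OpenSSL trust store: ask openssl for the certificate's subject hash, then symlink "<hash>.0" in /etc/ssl/certs at the PEM so hash-based lookup finds it. A Go sketch wrapping the exact commands from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA mirrors "openssl x509 -hash -noout -in <pem>" followed by
    // "ln -fs <pem> /etc/ssl/certs/<hash>.0".
    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        _ = os.Remove(link) // mimic ln's -f (force) flag
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
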
	I1210 07:25:22.275937  379535 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:25:22.279527  379535 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:25:22.279595  379535 kubeadm.go:401] StartCluster: {Name:addons-054300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:25:22.279695  379535 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:25:22.279783  379535 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:25:22.306924  379535 cri.go:89] found id: ""
	I1210 07:25:22.307001  379535 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:25:22.314946  379535 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:25:22.322827  379535 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:25:22.322891  379535 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:25:22.330873  379535 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:25:22.330903  379535 kubeadm.go:158] found existing configuration files:
	
	I1210 07:25:22.330959  379535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:25:22.338936  379535 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:25:22.339103  379535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:25:22.346675  379535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:25:22.354346  379535 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:25:22.354436  379535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:25:22.362242  379535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:25:22.370104  379535 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:25:22.370182  379535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:25:22.377766  379535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:25:22.385673  379535 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:25:22.385787  379535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:25:22.393336  379535 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:25:22.432359  379535 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 07:25:22.432454  379535 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:25:22.455812  379535 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:25:22.455913  379535 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:25:22.455975  379535 kubeadm.go:319] OS: Linux
	I1210 07:25:22.456062  379535 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:25:22.456150  379535 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:25:22.456214  379535 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:25:22.456271  379535 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:25:22.456327  379535 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:25:22.456382  379535 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:25:22.456434  379535 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:25:22.456489  379535 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:25:22.456547  379535 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:25:22.518675  379535 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:25:22.518827  379535 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:25:22.518959  379535 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:25:22.531396  379535 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:25:22.536637  379535 out.go:252]   - Generating certificates and keys ...
	I1210 07:25:22.536747  379535 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:25:22.536856  379535 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:25:22.860248  379535 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:25:23.796824  379535 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:25:23.896546  379535 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:25:24.409296  379535 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:25:24.623553  379535 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:25:24.623723  379535 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-054300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 07:25:24.799890  379535 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:25:24.800068  379535 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-054300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 07:25:25.268967  379535 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:25:25.583914  379535 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:25:25.907598  379535 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:25:25.907845  379535 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:25:26.166682  379535 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:25:26.771378  379535 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:25:26.870057  379535 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:25:27.567028  379535 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:25:28.426381  379535 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:25:28.426974  379535 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:25:28.429810  379535 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:25:28.434002  379535 out.go:252]   - Booting up control plane ...
	I1210 07:25:28.434113  379535 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:25:28.434191  379535 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:25:28.434268  379535 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:25:28.449101  379535 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:25:28.449449  379535 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:25:28.458111  379535 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:25:28.458235  379535 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:25:28.458302  379535 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:25:28.604351  379535 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:25:28.604473  379535 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:25:30.107599  379535 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501826536s
	I1210 07:25:30.109958  379535 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:25:30.110061  379535 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 07:25:30.110347  379535 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:25:30.110435  379535 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:25:33.927764  379535 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.816814046s
	I1210 07:25:34.618432  379535 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.50843364s
	I1210 07:25:36.611588  379535 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.50149902s
	I1210 07:25:36.645771  379535 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:25:36.661506  379535 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:25:36.679956  379535 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:25:36.680173  379535 kubeadm.go:319] [mark-control-plane] Marking the node addons-054300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:25:36.692487  379535 kubeadm.go:319] [bootstrap-token] Using token: j2595f.2uzlcpoq828sdy0t
	I1210 07:25:36.695598  379535 out.go:252]   - Configuring RBAC rules ...
	I1210 07:25:36.695729  379535 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:25:36.701812  379535 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:25:36.710331  379535 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:25:36.714509  379535 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:25:36.718647  379535 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:25:36.723470  379535 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:25:37.020842  379535 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:25:37.445442  379535 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:25:38.020051  379535 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:25:38.021570  379535 kubeadm.go:319] 
	I1210 07:25:38.021649  379535 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:25:38.021654  379535 kubeadm.go:319] 
	I1210 07:25:38.021733  379535 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:25:38.021737  379535 kubeadm.go:319] 
	I1210 07:25:38.021770  379535 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:25:38.021833  379535 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:25:38.021883  379535 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:25:38.021887  379535 kubeadm.go:319] 
	I1210 07:25:38.021941  379535 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:25:38.021945  379535 kubeadm.go:319] 
	I1210 07:25:38.021992  379535 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:25:38.021997  379535 kubeadm.go:319] 
	I1210 07:25:38.022048  379535 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:25:38.022133  379535 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:25:38.022202  379535 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:25:38.022206  379535 kubeadm.go:319] 
	I1210 07:25:38.022290  379535 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:25:38.022369  379535 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:25:38.022373  379535 kubeadm.go:319] 
	I1210 07:25:38.022457  379535 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j2595f.2uzlcpoq828sdy0t \
	I1210 07:25:38.022560  379535 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54503a554dcd3ad3945fa55f63c2936466b69a16c4d6182df26a96009ed0cd66 \
	I1210 07:25:38.022580  379535 kubeadm.go:319] 	--control-plane 
	I1210 07:25:38.022584  379535 kubeadm.go:319] 
	I1210 07:25:38.022669  379535 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:25:38.022673  379535 kubeadm.go:319] 
	I1210 07:25:38.022755  379535 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j2595f.2uzlcpoq828sdy0t \
	I1210 07:25:38.022857  379535 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54503a554dcd3ad3945fa55f63c2936466b69a16c4d6182df26a96009ed0cd66 
	I1210 07:25:38.026853  379535 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:25:38.027129  379535 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:25:38.027246  379535 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:25:38.027271  379535 cni.go:84] Creating CNI manager for ""
	I1210 07:25:38.027282  379535 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:25:38.030419  379535 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 07:25:38.033293  379535 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 07:25:38.038407  379535 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 07:25:38.038436  379535 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 07:25:38.054611  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 07:25:38.365291  379535 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:25:38.365451  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:38.365537  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-054300 minikube.k8s.io/updated_at=2025_12_10T07_25_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=addons-054300 minikube.k8s.io/primary=true
	I1210 07:25:38.391817  379535 ops.go:34] apiserver oom_adj: -16
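	Annotation: oom_adj -16 means the kernel is strongly discouraged from OOM-killing the apiserver. A Go sketch of the logged check, "cat /proc/$(pgrep kube-apiserver)/oom_adj" (assumes a single apiserver process; pgrep can return several pids):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("apiserver not running:", err)
            return
        }
        adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }
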
	I1210 07:25:38.570561  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:39.071452  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:39.571206  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:40.070811  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:40.570800  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:41.071282  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:41.571285  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:42.071130  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:42.571192  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:42.748789  379535 kubeadm.go:1114] duration metric: took 4.383396627s to wait for elevateKubeSystemPrivileges
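	Annotation: the repeated "kubectl get sa default" runs above are a roughly 500ms poll until the default service account exists. A generic Go sketch of that poll-with-deadline pattern (the check function below is a hypothetical stand-in, not minikube's actual elevateKubeSystemPrivileges code):

    package main

    import (
        "context"
        "errors"
        "time"
    )

    // pollUntil re-runs check at the given interval until it succeeds or ctx
    // expires.
    func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
        t := time.NewTicker(interval)
        defer t.Stop()
        for {
            if err := check(); err == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return errors.New("timed out waiting for condition")
            case <-t.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        n := 0
        _ = pollUntil(ctx, 500*time.Millisecond, func() error {
            n++
            if n < 3 {
                return errors.New("not ready") // stand-in for "get sa default" failing
            }
            return nil
        })
    }
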
	I1210 07:25:42.748816  379535 kubeadm.go:403] duration metric: took 20.469243282s to StartCluster
	I1210 07:25:42.748834  379535 settings.go:142] acquiring lock: {Name:mk83336eaf1e9f7632e16e15e8d9e14eb0e0d0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:42.748944  379535 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:25:42.749383  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:42.749560  379535 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:25:42.749742  379535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:25:42.749997  379535 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:25:42.750026  379535 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 07:25:42.750096  379535 addons.go:70] Setting yakd=true in profile "addons-054300"
	I1210 07:25:42.750110  379535 addons.go:239] Setting addon yakd=true in "addons-054300"
	I1210 07:25:42.750132  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.750573  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.751055  379535 addons.go:70] Setting metrics-server=true in profile "addons-054300"
	I1210 07:25:42.751072  379535 addons.go:239] Setting addon metrics-server=true in "addons-054300"
	I1210 07:25:42.751093  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.751511  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.751693  379535 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-054300"
	I1210 07:25:42.751722  379535 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-054300"
	I1210 07:25:42.751746  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.752162  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.755088  379535 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-054300"
	I1210 07:25:42.755120  379535 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-054300"
	I1210 07:25:42.755162  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.755649  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.758822  379535 addons.go:70] Setting registry=true in profile "addons-054300"
	I1210 07:25:42.758901  379535 addons.go:239] Setting addon registry=true in "addons-054300"
	I1210 07:25:42.758949  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.759568  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.766084  379535 addons.go:70] Setting cloud-spanner=true in profile "addons-054300"
	I1210 07:25:42.766131  379535 addons.go:239] Setting addon cloud-spanner=true in "addons-054300"
	I1210 07:25:42.766167  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.766670  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.770271  379535 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-054300"
	I1210 07:25:42.770348  379535 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-054300"
	I1210 07:25:42.770379  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.770846  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.774729  379535 addons.go:70] Setting registry-creds=true in profile "addons-054300"
	I1210 07:25:42.774777  379535 addons.go:239] Setting addon registry-creds=true in "addons-054300"
	I1210 07:25:42.774814  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.775337  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.776781  379535 addons.go:70] Setting default-storageclass=true in profile "addons-054300"
	I1210 07:25:42.776844  379535 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-054300"
	I1210 07:25:42.777403  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.785668  379535 addons.go:70] Setting storage-provisioner=true in profile "addons-054300"
	I1210 07:25:42.785704  379535 addons.go:239] Setting addon storage-provisioner=true in "addons-054300"
	I1210 07:25:42.785744  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.786236  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.793721  379535 addons.go:70] Setting gcp-auth=true in profile "addons-054300"
	I1210 07:25:42.793768  379535 mustload.go:66] Loading cluster: addons-054300
	I1210 07:25:42.793976  379535 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:25:42.794235  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.797010  379535 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-054300"
	I1210 07:25:42.797046  379535 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-054300"
	I1210 07:25:42.797398  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.823327  379535 addons.go:70] Setting ingress=true in profile "addons-054300"
	I1210 07:25:42.823398  379535 addons.go:239] Setting addon ingress=true in "addons-054300"
	I1210 07:25:42.823467  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.824381  379535 addons.go:70] Setting volcano=true in profile "addons-054300"
	I1210 07:25:42.824410  379535 addons.go:239] Setting addon volcano=true in "addons-054300"
	I1210 07:25:42.824443  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.824929  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.827287  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.839584  379535 addons.go:70] Setting ingress-dns=true in profile "addons-054300"
	I1210 07:25:42.839621  379535 addons.go:239] Setting addon ingress-dns=true in "addons-054300"
	I1210 07:25:42.839670  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.840167  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.849889  379535 addons.go:70] Setting volumesnapshots=true in profile "addons-054300"
	I1210 07:25:42.849925  379535 addons.go:239] Setting addon volumesnapshots=true in "addons-054300"
	I1210 07:25:42.849960  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.850465  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.853368  379535 addons.go:70] Setting inspektor-gadget=true in profile "addons-054300"
	I1210 07:25:42.853398  379535 addons.go:239] Setting addon inspektor-gadget=true in "addons-054300"
	I1210 07:25:42.853431  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.853904  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.885681  379535 out.go:179] * Verifying Kubernetes components...
	I1210 07:25:42.888945  379535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:25:42.935871  379535 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 07:25:42.954425  379535 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 07:25:42.964823  379535 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 07:25:42.964856  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 07:25:42.964942  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
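	Annotation: this "docker container inspect" template, repeated below for each addon, resolves the host port Docker published for the container's 22/tcp so the ssh client knows where to dial (port 33143 in this run). A Go sketch wrapping the same command:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshPort returns the host port mapped to the container's 22/tcp.
    func sshPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshPort("addons-054300")
        fmt.Println(port, err)
    }
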
	I1210 07:25:42.975344  379535 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-054300"
	I1210 07:25:42.975390  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.975854  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.999370  379535 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 07:25:42.999399  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 07:25:42.999475  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.009494  379535 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 07:25:43.013279  379535 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 07:25:43.013475  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:43.023138  379535 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 07:25:43.023339  379535 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 07:25:43.023482  379535 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 07:25:43.050392  379535 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 07:25:43.054368  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 07:25:43.054434  379535 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 07:25:43.054535  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.058795  379535 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 07:25:43.058820  379535 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 07:25:43.058893  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.065183  379535 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 07:25:43.065261  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 07:25:43.065362  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.077872  379535 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 07:25:43.082385  379535 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 07:25:43.082410  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 07:25:43.082477  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.085796  379535 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 07:25:43.089398  379535 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 07:25:43.089457  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 07:25:43.089538  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.093508  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 07:25:43.099438  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 07:25:43.103943  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 07:25:43.108559  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 07:25:43.113009  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 07:25:43.115268  379535 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 07:25:43.115669  379535 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1210 07:25:43.116026  379535 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 07:25:43.125624  379535 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 07:25:43.125656  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 07:25:43.125715  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.138813  379535 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 07:25:43.141166  379535 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:25:43.141185  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:25:43.141247  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.157909  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 07:25:43.162018  379535 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 07:25:43.162044  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 07:25:43.162111  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.167453  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 07:25:43.167517  379535 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 07:25:43.167592  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.175304  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 07:25:43.176780  379535 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 07:25:43.177726  379535 addons.go:239] Setting addon default-storageclass=true in "addons-054300"
	I1210 07:25:43.181023  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:43.181486  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:43.177764  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.185884  379535 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 07:25:43.185911  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 07:25:43.185979  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.186658  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 07:25:43.219889  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.257669  379535 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 07:25:43.261393  379535 out.go:179]   - Using image docker.io/busybox:stable
	I1210 07:25:43.261452  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 07:25:43.264178  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 07:25:43.264212  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 07:25:43.264288  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.264799  379535 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 07:25:43.264810  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 07:25:43.264855  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.289447  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.360362  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.363518  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.364848  379535 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:25:43.364864  379535 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:25:43.365000  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.370413  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.374646  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.378905  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.383495  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.414873  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.435283  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.438931  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.446038  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.452982  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.461538  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	W1210 07:25:43.464745  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.464781  379535 retry.go:31] will retry after 308.729586ms: ssh: handshake failed: EOF
	W1210 07:25:43.465808  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.465830  379535 retry.go:31] will retry after 139.182681ms: ssh: handshake failed: EOF
	W1210 07:25:43.466555  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.466573  379535 retry.go:31] will retry after 151.307701ms: ssh: handshake failed: EOF
	W1210 07:25:43.466916  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.466930  379535 retry.go:31] will retry after 336.970388ms: ssh: handshake failed: EOF
	I1210 07:25:43.510736  379535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:25:43.510821  379535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1210 07:25:43.606200  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.606227  379535 retry.go:31] will retry after 360.804886ms: ssh: handshake failed: EOF
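	Annotation: the handshake failures above are retried after short randomized delays (139ms to 361ms in this run). A minimal Go sketch of that jittered-retry pattern (the dial function is a stand-in, not minikube's sshutil):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // withRetry retries dial up to attempts times, sleeping a randomized
    // 100-400ms between tries so concurrent dialers don't hammer in lockstep.
    func withRetry(attempts int, dial func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = dial(); err == nil {
                return nil
            }
            d := time.Duration(100+rand.Intn(300)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        _ = withRetry(3, func() error { return errors.New("ssh: handshake failed: EOF") })
    }
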
	I1210 07:25:43.816827  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 07:25:44.078612  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 07:25:44.079751  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 07:25:44.090200  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 07:25:44.101807  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 07:25:44.101875  379535 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 07:25:44.108831  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:25:44.135445  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 07:25:44.199592  379535 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 07:25:44.199624  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 07:25:44.204341  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 07:25:44.245447  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 07:25:44.245473  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 07:25:44.254347  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 07:25:44.281849  379535 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 07:25:44.281875  379535 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 07:25:44.284931  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 07:25:44.284955  379535 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 07:25:44.356633  379535 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 07:25:44.356661  379535 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 07:25:44.363967  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 07:25:44.380532  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 07:25:44.380559  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 07:25:44.520190  379535 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 07:25:44.520217  379535 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 07:25:44.522478  379535 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 07:25:44.522500  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 07:25:44.524639  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 07:25:44.524663  379535 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 07:25:44.614484  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:25:44.628458  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 07:25:44.628529  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 07:25:44.629390  379535 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 07:25:44.629445  379535 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 07:25:44.714017  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 07:25:44.722115  379535 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 07:25:44.722181  379535 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 07:25:44.742852  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 07:25:44.742933  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 07:25:44.749839  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 07:25:44.845846  379535 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.334978518s)
	I1210 07:25:44.846628  379535 node_ready.go:35] waiting up to 6m0s for node "addons-054300" to be "Ready" ...
	I1210 07:25:44.846706  379535 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.33593965s)
	I1210 07:25:44.846798  379535 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
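The escaped sed pipeline in the 07:25:44.846706 entry above is hard to read in one line: it rewrites the coredns ConfigMap so the Corefile gains a "hosts" stanza (and a "log" directive ahead of "errors"). Reconstructed from the command itself, the injected stanza is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

which is what makes host.minikube.internal resolve to the Docker host at 192.168.49.1 from inside the cluster, as the "host record injected" line above confirms.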
	I1210 07:25:44.944488  379535 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 07:25:44.944566  379535 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 07:25:45.017588  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.200719268s)
	I1210 07:25:45.026606  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 07:25:45.026690  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 07:25:45.036267  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 07:25:45.137819  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 07:25:45.137926  379535 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 07:25:45.224194  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 07:25:45.224374  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 07:25:45.341278  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.262586976s)
	I1210 07:25:45.356582  379535 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-054300" context rescaled to 1 replicas
	I1210 07:25:45.379609  379535 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 07:25:45.379681  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 07:25:45.429938  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 07:25:45.430010  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 07:25:45.446722  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 07:25:45.446792  379535 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 07:25:45.461045  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 07:25:45.461115  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 07:25:45.544427  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 07:25:45.544501  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 07:25:45.567400  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 07:25:45.597184  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 07:25:45.597265  379535 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 07:25:45.789118  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1210 07:25:46.852206  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:48.066955  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.987135882s)
	I1210 07:25:48.067061  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.976798742s)
	I1210 07:25:48.067114  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.958214639s)
	I1210 07:25:48.067151  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.931684581s)
	W1210 07:25:48.860299  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:48.961010  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.756631762s)
	I1210 07:25:48.961043  379535 addons.go:495] Verifying addon ingress=true in "addons-054300"
	I1210 07:25:48.961225  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.706849852s)
	I1210 07:25:48.961444  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.597444765s)
	I1210 07:25:48.961490  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.346985448s)
	I1210 07:25:48.961623  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.247518359s)
	I1210 07:25:48.961640  379535 addons.go:495] Verifying addon metrics-server=true in "addons-054300"
	I1210 07:25:48.961667  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.211768102s)
	I1210 07:25:48.961678  379535 addons.go:495] Verifying addon registry=true in "addons-054300"
	I1210 07:25:48.962039  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.925681048s)
	I1210 07:25:48.962296  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.394818158s)
	W1210 07:25:48.962321  379535 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 07:25:48.962341  379535 retry.go:31] will retry after 259.193709ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
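The apply failure above is a CRD-establishment race: the three snapshot.storage.k8s.io CRDs and the csi-hostpath-snapclass VolumeSnapshotClass that depends on them are submitted in one kubectl apply, and the API server has not finished registering the new kinds when the class arrives, hence "ensure CRDs are installed first". minikube's retry.go simply waits 259ms and retries (falling back to kubectl apply --force below). A more surgical alternative is to gate dependent objects on the CRD's Established condition; the following client-go sketch shows the idea (waitForCRDEstablished is a hypothetical helper, not minikube's actual code):

	// Sketch only: gate dependent resources on CRD establishment.
	package addons // illustrative package name

	import (
		"context"
		"time"

		apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// waitForCRDEstablished polls until the named CRD reports the Established
	// condition, i.e. until the API server will accept resources of its kind.
	func waitForCRDEstablished(ctx context.Context, c apiextensionsclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // CRD not visible yet; keep polling
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil // created but not yet Established
			})
	}

With that gate, the VolumeSnapshotClass apply would be deferred until volumesnapshotclasses.snapshot.storage.k8s.io is Established, instead of retrying on failure as the log shows.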
	I1210 07:25:48.965103  379535 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-054300 service yakd-dashboard -n yakd-dashboard
	
	I1210 07:25:48.965245  379535 out.go:179] * Verifying registry addon...
	I1210 07:25:48.965290  379535 out.go:179] * Verifying ingress addon...
	I1210 07:25:48.970623  379535 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 07:25:48.970698  379535 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1210 07:25:48.979123  379535 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
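The storage-provisioner-rancher warning above is an optimistic-concurrency conflict, not a hard failure: the local-path StorageClass was modified between minikube's read and its write, so the API server rejected the stale update. The standard client-go remedy is to wrap the read-modify-write in retry.RetryOnConflict, roughly as below (markDefault is an illustrative helper, not minikube's actual code):

	// Sketch only: retry a read-modify-write on update conflicts.
	package addons // illustrative package name

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markDefault re-reads the StorageClass on every attempt, so a Conflict
	// from a concurrent writer just triggers another Get+Update round.
	func markDefault(ctx context.Context, c kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := c.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = c.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}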
	I1210 07:25:48.979775  379535 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 07:25:48.979790  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:48.980042  379535 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 07:25:48.980050  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:49.221906  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 07:25:49.228714  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.439512237s)
	I1210 07:25:49.228749  379535 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-054300"
	I1210 07:25:49.231984  379535 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 07:25:49.236414  379535 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 07:25:49.253799  379535 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 07:25:49.253872  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
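The long run of kapi.go:96 lines that follows is a poll loop: roughly twice a second, minikube lists the pods behind each label selector and logs the aggregate state until every pod leaves Pending (or the kapi.go:75 timeout expires). A minimal sketch of such a loop with client-go, assuming a waitForPodsRunning helper (illustrative, not minikube's exact implementation):

	// Sketch only: poll a label selector until all matching pods run.
	package addons // illustrative package name

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls until at least one pod matches the selector
	// and every matching pod reports phase Running.
	func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or nothing scheduled yet
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // still Pending, as in the log lines below
					}
				}
				return true, nil
			})
	}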
	I1210 07:25:49.475112  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:49.475745  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:49.740232  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:49.974842  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:49.975185  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:50.240742  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:50.474800  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:50.475192  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:50.631155  379535 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 07:25:50.631319  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:50.651100  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:50.740624  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:50.757512  379535 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 07:25:50.770511  379535 addons.go:239] Setting addon gcp-auth=true in "addons-054300"
	I1210 07:25:50.770559  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:50.771044  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:50.788341  379535 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 07:25:50.788401  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:50.809701  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:50.974580  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:50.974862  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:51.239648  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:25:51.350301  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:51.474741  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:51.475125  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:51.741212  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:51.975869  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:51.976141  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:52.005853  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.783894619s)
	I1210 07:25:52.005991  379535 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.21762259s)
	I1210 07:25:52.009250  379535 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 07:25:52.012090  379535 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 07:25:52.014882  379535 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 07:25:52.014913  379535 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 07:25:52.030261  379535 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 07:25:52.030288  379535 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 07:25:52.045991  379535 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 07:25:52.046068  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 07:25:52.060219  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 07:25:52.239863  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:52.481877  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:52.482708  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:52.554826  379535 addons.go:495] Verifying addon gcp-auth=true in "addons-054300"
	I1210 07:25:52.558570  379535 out.go:179] * Verifying gcp-auth addon...
	I1210 07:25:52.563199  379535 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 07:25:52.577567  379535 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 07:25:52.577601  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:52.739857  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:52.974557  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:52.975047  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:53.075600  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:53.239189  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:53.474606  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:53.474931  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:53.566881  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:53.739956  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:25:53.849804  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:53.974081  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:53.974623  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:54.066524  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:54.239620  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:54.474013  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:54.474384  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:54.566228  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:54.740111  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:54.974323  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:54.974499  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:55.066606  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:55.239581  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:55.475049  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:55.475137  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:55.566986  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:55.739821  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:55.975082  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:55.975237  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:56.066224  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:56.239148  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:25:56.349795  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:56.474582  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:56.475120  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:56.566910  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:56.740124  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:56.973905  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:56.974121  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:57.066014  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:57.240239  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:57.474586  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:57.474646  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:57.566571  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:57.739715  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:57.973733  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:57.973916  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:58.066765  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:58.239590  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:58.474502  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:58.474692  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:58.566649  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:58.739528  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:25:58.850128  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:58.974222  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:58.974503  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:59.066479  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:59.239145  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:59.474630  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:59.474797  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:59.566711  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:59.739611  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:59.974444  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:59.974908  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:00.079545  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:00.248389  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:00.473960  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:00.474470  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:00.566256  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:00.740335  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:00.850691  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:00.974043  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:00.974243  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:01.066852  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:01.240362  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:01.474497  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:01.474663  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:01.566494  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:01.739669  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:01.974838  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:01.975319  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:02.066943  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:02.240252  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:02.474313  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:02.474596  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:02.566593  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:02.739521  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:02.974612  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:02.974987  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:03.066721  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:03.239807  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:03.349828  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:03.474790  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:03.474856  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:03.567083  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:03.739899  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:03.974397  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:03.974603  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:04.074764  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:04.239830  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:04.474334  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:04.474724  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:04.566520  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:04.739336  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:04.974447  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:04.974553  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:05.066496  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:05.239568  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:05.475094  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:05.475795  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:05.566670  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:05.739828  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:05.849919  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:05.973919  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:05.973970  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:06.067055  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:06.239904  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:06.474363  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:06.474433  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:06.566512  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:06.739292  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:06.974262  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:06.974623  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:07.067047  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:07.240144  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:07.475513  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:07.475607  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:07.566444  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:07.739328  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:07.850119  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:07.974557  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:07.974911  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:08.066956  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:08.240121  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:08.474962  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:08.475115  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:08.566772  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:08.740541  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:08.974915  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:08.974973  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:09.066443  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:09.239271  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:09.474582  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:09.474782  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:09.566293  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:09.739433  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:09.974043  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:09.974201  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:10.067853  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:10.239884  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:10.349517  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:10.478676  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:10.481342  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:10.566707  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:10.739800  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:10.974899  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:10.975200  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:11.066797  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:11.239697  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:11.474610  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:11.475080  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:11.570718  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:11.739888  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:11.974645  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:11.975056  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:12.066894  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:12.239840  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:12.349576  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:12.473931  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:12.474204  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:12.565982  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:12.740010  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:12.973699  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:12.974394  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:13.066469  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:13.239347  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:13.475139  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:13.475534  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:13.566308  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:13.740258  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:13.975626  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:13.979428  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:14.066481  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:14.239702  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:14.350607  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:14.474586  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:14.475558  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:14.566383  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:14.739376  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:14.974989  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:14.975445  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:15.066372  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:15.239234  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:15.475091  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:15.475482  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:15.566275  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:15.739389  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:15.975185  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:15.975277  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:16.067055  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:16.240069  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:16.475393  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:16.476633  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:16.566247  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:16.739103  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:16.849757  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:16.974221  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:16.974402  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:17.066549  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:17.239823  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:17.474104  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:17.474227  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:17.566079  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:17.740104  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:17.974304  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:17.974591  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:18.066741  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:18.240743  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:18.474224  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:18.474878  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:18.566921  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:18.739868  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:18.849890  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:18.974356  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:18.974560  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:19.066381  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:19.240129  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:19.474285  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:19.474423  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:19.566113  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:19.739972  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:19.974723  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:19.975260  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:20.066226  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:20.240656  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:20.474292  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:20.475055  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:20.566028  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:20.740244  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:20.850077  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:20.974244  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:20.974606  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:21.066628  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:21.239578  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:21.474325  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:21.474475  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:21.566494  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:21.739610  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:21.974483  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:21.974680  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:22.066777  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:22.239689  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:22.474224  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:22.474393  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:22.566202  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:22.740598  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:22.974669  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:22.975000  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:23.066878  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:23.239664  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:23.351613  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:23.473807  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:23.474358  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:23.566134  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:23.739982  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:23.974811  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:23.975215  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:24.067056  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:24.240028  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:24.375728  379535 node_ready.go:49] node "addons-054300" is "Ready"
	I1210 07:26:24.375760  379535 node_ready.go:38] duration metric: took 39.529026157s for node "addons-054300" to be "Ready" ...
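The node_ready.go lines above poll the node object until its Ready condition flips to True, which here took 39.5s. A minimal sketch of the underlying check, assuming an already-built client-go clientset (the isNodeReady helper name is illustrative, not minikube's actual code):

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isNodeReady reports whether the named node has Ready=True.
    func isNodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil // no Ready condition reported yet
    }

The caller polls this on an interval and emits the W-level "will retry" line each time it returns false, as seen above.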
	I1210 07:26:24.375773  379535 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:26:24.375863  379535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:24.399205  379535 api_server.go:72] duration metric: took 41.649618356s to wait for apiserver process to appear ...
	I1210 07:26:24.399234  379535 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:26:24.399276  379535 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 07:26:24.479720  379535 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
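The healthz wait is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok", exactly as logged above. A sketch of one probe; the InsecureSkipVerify shortcut is an assumption for brevity, real code should trust the cluster CA instead:

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz probes the apiserver /healthz endpoint once.
    func checkHealthz(url string) error { // e.g. https://192.168.49.2:8443/healthz
        tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
        client := &http.Client{Transport: tr, Timeout: 5 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }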
	I1210 07:26:24.485954  379535 api_server.go:141] control plane version: v1.34.2
	I1210 07:26:24.485981  379535 api_server.go:131] duration metric: took 86.740423ms to wait for apiserver health ...
	I1210 07:26:24.485990  379535 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:26:24.492998  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:24.508707  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:24.522627  379535 system_pods.go:59] 19 kube-system pods found
	I1210 07:26:24.522714  379535 system_pods.go:61] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending
	I1210 07:26:24.522737  379535 system_pods.go:61] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending
	I1210 07:26:24.522758  379535 system_pods.go:61] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending
	I1210 07:26:24.522791  379535 system_pods.go:61] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending
	I1210 07:26:24.522821  379535 system_pods.go:61] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:24.522840  379535 system_pods.go:61] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:24.522867  379535 system_pods.go:61] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:24.522885  379535 system_pods.go:61] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:24.522913  379535 system_pods.go:61] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending
	I1210 07:26:24.522942  379535 system_pods.go:61] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:24.522960  379535 system_pods.go:61] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:24.522977  379535 system_pods.go:61] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending
	I1210 07:26:24.522997  379535 system_pods.go:61] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending
	I1210 07:26:24.523041  379535 system_pods.go:61] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:24.523064  379535 system_pods.go:61] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending
	I1210 07:26:24.523085  379535 system_pods.go:61] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending
	I1210 07:26:24.523115  379535 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending
	I1210 07:26:24.523136  379535 system_pods.go:61] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending
	I1210 07:26:24.523154  379535 system_pods.go:61] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending
	I1210 07:26:24.523191  379535 system_pods.go:74] duration metric: took 37.184148ms to wait for pod list to return data ...
	I1210 07:26:24.523214  379535 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:26:24.537021  379535 default_sa.go:45] found service account: "default"
	I1210 07:26:24.537099  379535 default_sa.go:55] duration metric: took 13.865785ms for default service account to be created ...
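The default_sa.go wait polls for the "default" ServiceAccount in the "default" namespace, which kube-controller-manager creates shortly after startup. A sketch of the lookup (helper name illustrative; imports as in the node-Ready sketch, plus apierrors "k8s.io/apimachinery/pkg/api/errors"):

    // defaultServiceAccountExists checks whether the "default" ServiceAccount is present.
    func defaultServiceAccountExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil // not created yet; caller keeps polling
        }
        if err != nil {
            return false, err
        }
        return true, nil
    }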
	I1210 07:26:24.537127  379535 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:26:24.549993  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:24.550028  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending
	I1210 07:26:24.550035  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending
	I1210 07:26:24.550061  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending
	I1210 07:26:24.550068  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending
	I1210 07:26:24.550073  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:24.550079  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:24.550083  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:24.550087  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:24.550103  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:24.550109  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:24.550120  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:24.550124  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending
	I1210 07:26:24.550148  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending
	I1210 07:26:24.550155  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:24.550159  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending
	I1210 07:26:24.550178  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending
	I1210 07:26:24.550182  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending
	I1210 07:26:24.550194  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending
	I1210 07:26:24.550198  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending
	I1210 07:26:24.550214  379535 retry.go:31] will retry after 209.733003ms: missing components: kube-dns
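Note that the retry delays grow from attempt to attempt (209ms here, then 271ms, 340ms, and 396ms in the retries that follow), i.e. a jittered backoff rather than a fixed interval. A minimal sketch of that pattern; the ~1.25x growth factor is inferred from the logged delays, not taken from minikube's retry package:

    import (
        "log"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries fn with a growing, lightly jittered delay.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)/4)) // up to +25% jitter
            log.Printf("will retry after %v: %v", wait, err)
            time.Sleep(wait)
            delay = delay * 5 / 4 // grow ~1.25x per attempt
        }
        return err
    }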
	I1210 07:26:24.582574  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:24.805578  379535 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 07:26:24.805650  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
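Each kapi.go:96 line above is one poll of a label selector: list the matching pods in kube-system and keep waiting while any of them is still Pending. A sketch of that check (helper name and selector value illustrative; imports as in the node-Ready sketch):

    // podsRunning reports whether every pod matching selector is Running.
    func podsRunning(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
            LabelSelector: selector, // e.g. "kubernetes.io/minikube-addons=csi-hostpath-driver"
        })
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil // still Pending; caller logs and retries
            }
        }
        return len(pods.Items) > 0, nil
    }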
	I1210 07:26:24.812816  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:24.812899  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:26:24.812923  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending
	I1210 07:26:24.812947  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending
	I1210 07:26:24.812981  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending
	I1210 07:26:24.813000  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:24.813019  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:24.813037  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:24.813067  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:24.813089  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:24.813108  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:24.813128  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:24.813164  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 07:26:24.813183  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending
	I1210 07:26:24.813204  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:24.813222  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending
	I1210 07:26:24.813261  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending
	I1210 07:26:24.813281  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending
	I1210 07:26:24.813302  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:24.813336  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending
	I1210 07:26:24.813367  379535 retry.go:31] will retry after 271.44037ms: missing components: kube-dns
	I1210 07:26:24.984193  379535 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 07:26:24.984264  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:24.984288  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:25.074422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:25.101318  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:25.101417  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:26:25.101440  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending
	I1210 07:26:25.101462  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending
	I1210 07:26:25.101493  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending
	I1210 07:26:25.101512  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:25.101533  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:25.101552  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:25.101580  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:25.101602  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:25.101632  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:25.101662  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:25.101689  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 07:26:25.101712  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 07:26:25.101745  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:25.101767  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 07:26:25.101791  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 07:26:25.101828  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.101853  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.101877  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:26:25.101919  379535 retry.go:31] will retry after 340.731568ms: missing components: kube-dns
	I1210 07:26:25.242498  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:25.477132  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:25.477210  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:26:25.477232  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 07:26:25.477255  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 07:26:25.477293  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 07:26:25.477319  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:25.477339  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:25.477357  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:25.477376  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:25.477409  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:25.477437  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:25.477461  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:25.477484  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 07:26:25.477515  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 07:26:25.477549  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:25.477569  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 07:26:25.477588  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 07:26:25.477619  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.477641  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.477662  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:26:25.477691  379535 retry.go:31] will retry after 396.925776ms: missing components: kube-dns
	I1210 07:26:25.564942  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:25.565686  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:25.567599  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:25.743248  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:25.883810  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:25.883861  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Running
	I1210 07:26:25.883874  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 07:26:25.883888  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 07:26:25.883904  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 07:26:25.883930  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:25.883946  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:25.883951  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:25.883956  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:25.883974  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:25.883986  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:25.883992  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:25.884005  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 07:26:25.884020  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 07:26:25.884027  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:25.884040  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 07:26:25.884047  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 07:26:25.884058  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.884080  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.884085  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Running
	I1210 07:26:25.884098  379535 system_pods.go:126] duration metric: took 1.346951034s to wait for k8s-apps to be running ...
	I1210 07:26:25.884111  379535 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:26:25.884192  379535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:26:25.904125  379535 system_svc.go:56] duration metric: WaitForService took 20.003947ms to wait for kubelet
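The kubelet check above asks systemd directly; `systemctl is-active --quiet` exits 0 only when the unit is active, so the command's exit status is the whole check. A sketch of the same probe run locally with os/exec (in the test it goes over SSH via ssh_runner):

    import "os/exec"

    // kubeletActive returns nil iff systemd reports the kubelet unit active.
    func kubeletActive() error {
        // mirrors the logged command: sudo systemctl is-active --quiet service kubelet
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        return cmd.Run() // nil on exit 0 (active); *exec.ExitError otherwise
    }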
	I1210 07:26:25.904165  379535 kubeadm.go:587] duration metric: took 43.154573113s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:26:25.904187  379535 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:26:25.907800  379535 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 07:26:25.907871  379535 node_conditions.go:123] node cpu capacity is 2
	I1210 07:26:25.907900  379535 node_conditions.go:105] duration metric: took 3.702244ms to run NodePressure ...
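The node_conditions step reads the node's advertised capacity (203034800Ki of ephemeral storage and 2 CPUs here) and then confirms no pressure condition is set. A sketch using the same assumed clientset as the earlier snippets:

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkNodePressure prints capacity and fails on memory/disk/PID pressure.
    func checkNodePressure(ctx context.Context, cs kubernetes.Interface, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("ephemeral-storage: %s, cpu: %s\n",
            node.Status.Capacity.StorageEphemeral().String(),
            node.Status.Capacity.Cpu().String())
        for _, cond := range node.Status.Conditions {
            switch cond.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                if cond.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", name, cond.Type)
                }
            }
        }
        return nil
    }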
	I1210 07:26:25.907924  379535 start.go:242] waiting for startup goroutines ...
	I1210 07:26:25.975802  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:25.976158  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:26.067213  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:26.241306  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:26.477102  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:26.477584  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:26.567491  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:26.740422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:26.976913  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:26.977413  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:27.066776  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:27.241466  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:27.476093  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:27.476538  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:27.566996  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:27.740736  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:27.975397  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:27.976786  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:28.067834  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:28.241029  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:28.476284  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:28.476879  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:28.567309  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:28.740984  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:28.976294  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:28.976689  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:29.067033  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:29.241064  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:29.475139  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:29.475278  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:29.566515  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:29.740851  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:29.975744  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:29.976953  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:30.067857  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:30.240731  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:30.475192  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:30.475468  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:30.566491  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:30.739751  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:30.975392  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:30.975687  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:31.066770  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:31.244038  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:31.475171  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:31.475402  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:31.566515  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:31.739440  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:31.975328  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:31.975434  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:32.066642  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:32.240668  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:32.475086  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:32.475267  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:32.566696  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:32.740263  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:32.976435  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:32.977885  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:33.070502  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:33.240815  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:33.476371  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:33.476485  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:33.567723  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:33.744967  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:33.976697  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:33.977190  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:34.067660  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:34.241708  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:34.476918  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:34.477448  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:34.567636  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:34.742715  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:34.975142  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:34.975677  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:35.067155  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:35.241663  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:35.474960  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:35.475368  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:35.566431  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:35.747921  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:35.977537  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:35.978174  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:36.066751  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:36.240974  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:36.480023  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:36.480673  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:36.566511  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:36.740422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:36.976243  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:36.976634  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:37.067380  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:37.240341  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:37.476284  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:37.476445  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:37.566168  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:37.740659  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:37.975726  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:37.977103  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:38.072331  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:38.239796  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:38.475679  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:38.476094  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:38.567398  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:38.740111  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:38.974829  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:38.975269  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:39.066523  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:39.240138  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:39.476919  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:39.477394  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:39.566076  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:39.740469  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:39.975070  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:39.975186  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:40.066282  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:40.240700  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:40.475626  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:40.475755  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:40.566978  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:40.740953  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:40.974740  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:40.974932  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:41.066588  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:41.240067  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:41.475266  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:41.475415  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:41.567080  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:41.740482  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:41.974926  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:41.974926  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:42.066938  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:42.240183  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:42.475545  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:42.475776  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:42.573099  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:42.739598  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:42.974915  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:42.974974  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:43.066778  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:43.239916  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:43.476339  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:43.476766  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:43.566861  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:43.741069  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:43.975797  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:43.976020  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:44.067291  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:44.240747  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:44.474887  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:44.475062  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:44.567533  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:44.740416  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:44.975487  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:44.975776  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:45.068203  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:45.240803  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:45.476159  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:45.476610  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:45.566753  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:45.740071  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:45.975132  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:45.975275  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:46.066297  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:46.240669  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:46.476707  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:46.477206  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:46.566498  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:46.740110  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:46.975700  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:46.976423  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:47.066636  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:47.240474  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:47.474416  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:47.474556  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:47.566394  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:47.741320  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:47.974808  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:47.975833  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:48.067272  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:48.239626  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:48.485884  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:48.489903  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:48.571763  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:48.740687  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:48.975588  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:48.975986  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:49.066905  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:49.240433  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:49.477596  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:49.478708  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:49.566835  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:49.740258  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:49.975469  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:49.975650  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:50.066909  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:50.240628  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:50.474872  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:50.475042  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:50.567081  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:50.740674  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:50.974575  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:50.975889  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:51.067871  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:51.240837  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:51.474989  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:51.475355  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:51.566258  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:51.740422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:51.975283  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:51.975496  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:52.067417  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:52.240079  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:52.474490  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:52.474787  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:52.575443  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:52.739637  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:52.974491  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:52.976628  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:53.066816  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:53.240089  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:53.475785  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:53.476144  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:53.566320  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:53.739799  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:53.976167  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:53.976583  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:54.076270  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:54.240561  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:54.474881  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:54.475183  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:54.566716  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:54.740284  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:54.975671  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:54.975914  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:55.066596  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:55.239541  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:55.477784  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:55.478067  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:55.566700  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:55.739909  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:55.975356  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:55.975779  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:56.066713  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:56.243320  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:56.476612  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:56.477126  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:56.566246  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:56.741116  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:56.976301  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:56.976698  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:57.067169  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:57.240561  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:57.474809  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:57.474884  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:57.566892  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:57.740269  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:57.975089  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:57.975274  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:58.066882  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:58.242967  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:58.475545  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:58.476905  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:58.567597  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:58.778081  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:58.975275  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:58.976422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:59.066447  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:59.240258  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:59.476728  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:59.476987  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:59.566850  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:59.740975  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:59.975582  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:59.975768  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:00.077105  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:00.244417  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:00.475791  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:00.475914  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:00.566880  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:00.739986  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:00.975770  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:00.977302  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:01.066430  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:01.252209  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:01.476545  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:01.476855  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:01.567204  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:01.741111  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:01.975110  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:01.975319  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:02.075349  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:02.239759  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:02.474279  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:02.474955  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:02.566571  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:02.739818  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:02.975710  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:02.975747  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:03.076454  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:03.240423  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:03.477621  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:03.477962  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:03.567567  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:03.740736  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:03.976525  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:03.977036  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:04.067584  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:04.240413  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:04.476175  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:04.476444  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:04.565917  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:04.740226  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:04.981824  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:04.981962  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:05.078814  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:05.240293  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:05.478662  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:05.479796  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:05.566532  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:05.739932  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:05.983198  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:05.983898  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:06.067382  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:06.241099  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:06.476773  379535 kapi.go:107] duration metric: took 1m17.506150645s to wait for kubernetes.io/minikube-addons=registry ...
	I1210 07:27:06.477210  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:06.566714  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:06.741210  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:06.974945  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:07.068444  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:07.239869  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:07.477723  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:07.567248  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:07.740709  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:07.974010  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:08.067488  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:08.239456  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:08.474915  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:08.567277  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:08.739948  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:08.979211  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:09.080572  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:09.240962  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:09.475288  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:09.569585  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:09.746722  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:09.982336  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:10.066150  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:10.241031  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:10.475637  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:10.569166  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:10.742815  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:10.974230  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:11.066428  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:11.240053  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:11.475787  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:11.568145  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:11.741471  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:11.974359  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:12.066434  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:12.240634  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:12.476535  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:12.571813  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:12.740203  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:12.974389  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:13.067128  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:13.240786  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:13.488860  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:13.588661  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:13.739808  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:13.974085  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:14.068584  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:14.240428  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:14.474510  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:14.566547  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:14.740504  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:14.975611  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:15.066937  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:15.240718  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:15.474567  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:15.570343  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:15.740969  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:15.974831  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:16.073489  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:16.240364  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:16.475164  379535 kapi.go:107] duration metric: took 1m27.504458298s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 07:27:16.567277  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:16.739743  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:17.066987  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:17.241846  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:17.574305  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:17.740955  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:18.067157  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:18.240755  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:18.567312  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:18.740214  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:19.067404  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:19.243981  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:19.567373  379535 kapi.go:107] duration metric: took 1m27.004170429s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 07:27:19.572599  379535 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-054300 cluster.
	I1210 07:27:19.576091  379535 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 07:27:19.579363  379535 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 07:27:19.740796  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:20.241297  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:20.739866  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:21.240831  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:21.744453  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:22.240395  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:22.740156  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:23.240187  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:23.740885  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:24.241232  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:24.744571  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:25.239881  379535 kapi.go:107] duration metric: took 1m36.003467716s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 07:27:25.244673  379535 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, registry-creds, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1210 07:27:25.247471  379535 addons.go:530] duration metric: took 1m42.496824303s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget registry-creds storage-provisioner cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1210 07:27:25.247563  379535 start.go:247] waiting for cluster config update ...
	I1210 07:27:25.247651  379535 start.go:256] writing updated cluster config ...
	I1210 07:27:25.247982  379535 ssh_runner.go:195] Run: rm -f paused
	I1210 07:27:25.253264  379535 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:27:25.256976  379535 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4tklf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.263102  379535 pod_ready.go:94] pod "coredns-66bc5c9577-4tklf" is "Ready"
	I1210 07:27:25.263140  379535 pod_ready.go:86] duration metric: took 6.134904ms for pod "coredns-66bc5c9577-4tklf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.266343  379535 pod_ready.go:83] waiting for pod "etcd-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.270954  379535 pod_ready.go:94] pod "etcd-addons-054300" is "Ready"
	I1210 07:27:25.271005  379535 pod_ready.go:86] duration metric: took 4.632754ms for pod "etcd-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.273634  379535 pod_ready.go:83] waiting for pod "kube-apiserver-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.278431  379535 pod_ready.go:94] pod "kube-apiserver-addons-054300" is "Ready"
	I1210 07:27:25.278461  379535 pod_ready.go:86] duration metric: took 4.802847ms for pod "kube-apiserver-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.281119  379535 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.657638  379535 pod_ready.go:94] pod "kube-controller-manager-addons-054300" is "Ready"
	I1210 07:27:25.657668  379535 pod_ready.go:86] duration metric: took 376.524925ms for pod "kube-controller-manager-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.858038  379535 pod_ready.go:83] waiting for pod "kube-proxy-lt4ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:26.257439  379535 pod_ready.go:94] pod "kube-proxy-lt4ld" is "Ready"
	I1210 07:27:26.257474  379535 pod_ready.go:86] duration metric: took 399.407812ms for pod "kube-proxy-lt4ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:26.457886  379535 pod_ready.go:83] waiting for pod "kube-scheduler-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:26.857737  379535 pod_ready.go:94] pod "kube-scheduler-addons-054300" is "Ready"
	I1210 07:27:26.857768  379535 pod_ready.go:86] duration metric: took 399.813879ms for pod "kube-scheduler-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:26.857781  379535 pod_ready.go:40] duration metric: took 1.604474091s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:27:26.917742  379535 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1210 07:27:26.921655  379535 out.go:179] * Done! kubectl is now configured to use "addons-054300" cluster and "default" namespace by default
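
The kapi.go:96 lines above are minikube polling each addon's pods by label selector (roughly every 500ms per selector, judging by the timestamps) until they report Ready, at which point kapi.go:107 logs the duration metric. A minimal sketch of that polling pattern with client-go follows — not minikube's actual implementation; the namespace and selector are taken from the log, and the kubeconfig path is an assumption:

```go
// poll_ready.go - minimal sketch of label-selector readiness polling, in the
// style of minikube's kapi.go wait loop (not the actual implementation).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumption: default kubeconfig location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	selector := "kubernetes.io/minikube-addons=registry" // selector from the log above
	start := time.Now()
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
			break
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond) // ~500ms between polls, per the log timestamps
	}
	fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
}
```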
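The gcp-auth messages near the end of the log describe an opt-out label for the credential-mounting webhook. A hedged sketch of creating a pod that carries it — the pod name is illustrative, and the "true" value is an assumption (the message only names the `gcp-auth-skip-secret` key):

```go
// skip_gcp_auth.go - sketch of creating a pod that opts out of gcp-auth
// credential mounting via the label named in the log above.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // illustrative name
			// The gcp-auth webhook skips pods carrying this label key;
			// the "true" value is an assumption.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				// Image reused from the log above for illustration.
				{Name: "app", Image: "docker.io/kicbase/echo-server:1.0"},
			},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```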
	
	
	==> CRI-O <==
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.104923346Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=3c5975df-9bf1-46ae-a3e9-6153e7db0d8d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.108797012Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=bf5aa728-f51f-456b-951e-12a645ff192e name=/runtime.v1.ImageService/PullImage
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.111887861Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.722038838Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=bf5aa728-f51f-456b-951e-12a645ff192e name=/runtime.v1.ImageService/PullImage
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.722816034Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=6ce7c564-1212-4610-b3c4-a0dba24fe13c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.725020959Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ee973af8-4517-4e37-94d2-c71a9257d907 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.739431208Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-kn7ft/hello-world-app" id=f8a1949c-c143-4272-a60f-24f72504ec30 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.739765939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.757043428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.757291438Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/abda7bcff3670305eb863a8a07ff18dc7619b75fba4a8521cae3c2a7891ffbf1/merged/etc/passwd: no such file or directory"
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.757325621Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/abda7bcff3670305eb863a8a07ff18dc7619b75fba4a8521cae3c2a7891ffbf1/merged/etc/group: no such file or directory"
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.757570218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.774431047Z" level=info msg="Created container cc4090af1064abe4593373c78d13f70f87b5abf7e10fe7cd447967a8149b2b6c: default/hello-world-app-5d498dc89-kn7ft/hello-world-app" id=f8a1949c-c143-4272-a60f-24f72504ec30 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.778909666Z" level=info msg="Starting container: cc4090af1064abe4593373c78d13f70f87b5abf7e10fe7cd447967a8149b2b6c" id=9f6d5482-a614-424e-a59a-cc2ec713c541 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:30:49 addons-054300 crio[826]: time="2025-12-10T07:30:49.78375116Z" level=info msg="Started container" PID=7067 containerID=cc4090af1064abe4593373c78d13f70f87b5abf7e10fe7cd447967a8149b2b6c description=default/hello-world-app-5d498dc89-kn7ft/hello-world-app id=9f6d5482-a614-424e-a59a-cc2ec713c541 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c834f745d24a868f98d441ee5dd181a7611600ea5ead89e151a47aff62852cc9
	Dec 10 07:30:50 addons-054300 crio[826]: time="2025-12-10T07:30:50.365202347Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=08c2338f-2900-4e73-95a9-ecd24ad4b0ce name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:50 addons-054300 crio[826]: time="2025-12-10T07:30:50.367392626Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=8ddb71d5-69ca-402a-bf9c-02db62483329 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:30:50 addons-054300 crio[826]: time="2025-12-10T07:30:50.370483869Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-7pk58/registry-creds" id=d0cf470a-dcf2-4418-a2c1-306aa060cb68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:30:50 addons-054300 crio[826]: time="2025-12-10T07:30:50.370609901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:30:50 addons-054300 crio[826]: time="2025-12-10T07:30:50.387506037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:30:50 addons-054300 crio[826]: time="2025-12-10T07:30:50.393326123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:30:50 addons-054300 crio[826]: time="2025-12-10T07:30:50.420311814Z" level=info msg="Created container 85114969bcff21243ab3358a73a3906a47130df82e3ebc585bcc4d4188c56cf1: kube-system/registry-creds-764b6fb674-7pk58/registry-creds" id=d0cf470a-dcf2-4418-a2c1-306aa060cb68 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:30:50 addons-054300 crio[826]: time="2025-12-10T07:30:50.422832435Z" level=info msg="Starting container: 85114969bcff21243ab3358a73a3906a47130df82e3ebc585bcc4d4188c56cf1" id=74d7841f-c17b-4bbe-ae7f-2fbd3f8e5b49 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:30:50 addons-054300 crio[826]: time="2025-12-10T07:30:50.426426862Z" level=info msg="Started container" PID=7149 containerID=85114969bcff21243ab3358a73a3906a47130df82e3ebc585bcc4d4188c56cf1 description=kube-system/registry-creds-764b6fb674-7pk58/registry-creds id=74d7841f-c17b-4bbe-ae7f-2fbd3f8e5b49 name=/runtime.v1.RuntimeService/StartContainer sandboxID=263767cdf33dcc83f2e1241c1a5895b68221bfc40289b422e1a59cb3ef83c53e
	Dec 10 07:30:50 addons-054300 conmon[7147]: conmon 85114969bcff21243ab3 <ninfo>: container 7149 exited with status 1
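
The conmon line above reports the registry-creds container exiting with status 1, matching the Exited entries (attempts 1 and 2) in the container-status table below and the failed TestAddons/parallel/RegistryCreds in this report. The same exit code and restart count are exposed through pod status; a minimal sketch, with the pod name taken from the table:

```go
// crash_info.go - sketch of reading a crashing container's exit code and
// restart count from pod status (pod name taken from the table below).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
		"registry-creds-764b6fb674-7pk58", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		// LastTerminationState is populated once a container has crashed
		// and been restarted; for the log above this would show exit code 1.
		if t := cs.LastTerminationState.Terminated; t != nil {
			fmt.Printf("%s: restarts=%d lastExit=%d reason=%s\n",
				cs.Name, cs.RestartCount, t.ExitCode, t.Reason)
		}
	}
}
```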
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	85114969bcff2       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             Less than a second ago   Exited              registry-creds                           2                   263767cdf33dc       registry-creds-764b6fb674-7pk58             kube-system
	cc4090af1064a       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   c834f745d24a8       hello-world-app-5d498dc89-kn7ft             default
	1648f8705ee52       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             13 seconds ago           Exited              registry-creds                           1                   263767cdf33dc       registry-creds-764b6fb674-7pk58             kube-system
	d610a4c0d2718       cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1                                                                             2 minutes ago            Running             nginx                                    0                   b0c7d0ad830da       nginx                                       default
	5179e9b41e4bd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   ae5bd9c38d50c       busybox                                     default
	93c6c5614c0a9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   df048affbebe5       csi-hostpathplugin-bmkhb                    kube-system
	2f1be47dacec8       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   df048affbebe5       csi-hostpathplugin-bmkhb                    kube-system
	c796a7524dca5       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   df048affbebe5       csi-hostpathplugin-bmkhb                    kube-system
	31f1c3a5096a8       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   df048affbebe5       csi-hostpathplugin-bmkhb                    kube-system
	7eebbffe34f04       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   df048affbebe5       csi-hostpathplugin-bmkhb                    kube-system
	f3fe35bc7c9db       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   3be6747a9fc1d       gcp-auth-78565c9fb4-ws495                   gcp-auth
	cc2386e1e7501       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   8bacb1dc63a85       ingress-nginx-controller-85d4c799dd-htr8f   ingress-nginx
	ae6cef0f10ca5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   b036a941462e5       gadget-rhzvh                                gadget
	7513d10c4f49b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   506ef510f45b5       registry-proxy-x77gq                        kube-system
	5668f24e462ca       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   0a1fd72a315c4       nvidia-device-plugin-daemonset-jgw4d        kube-system
	0b8ebb10e62eb       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             3 minutes ago            Exited              patch                                    2                   03ca060766dcd       ingress-nginx-admission-patch-tlj69         ingress-nginx
	03d15632d2655       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   df048affbebe5       csi-hostpathplugin-bmkhb                    kube-system
	066dde63f910d       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   b8c927998cde5       csi-hostpath-attacher-0                     kube-system
	607af77f9ee4c       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   f74deb4fd23cd       csi-hostpath-resizer-0                      kube-system
	de2062e9f92ec       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   7a03a57b2de9b       ingress-nginx-admission-create-5dvp6        ingress-nginx
	119a600e8cd24       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   e8eed067ab62c       yakd-dashboard-5ff678cb9-r7798              yakd-dashboard
	b99ece48f039a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   1ebf01df32e18       snapshot-controller-7d9fbc56b8-9c9b8        kube-system
	44b34e0a56c70       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   6d3667317c1ba       snapshot-controller-7d9fbc56b8-p5w2h        kube-system
	2ba3f318a735a       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               4 minutes ago            Running             cloud-spanner-emulator                   0                   94d97ee14a0aa       cloud-spanner-emulator-5bdddb765-59bbv      default
	a21d19bf0c7d1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   9fea308758c43       local-path-provisioner-648f6765c9-vwdrf     local-path-storage
	b51e7b373a333       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   19e12fc266ce4       registry-6b586f9694-rgr2q                   kube-system
	4c160e32d403d       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   58f37a8226973       metrics-server-85b7d694d7-pcvgr             kube-system
	ea57c044de907       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   54dfc2e7a3622       kube-ingress-dns-minikube                   kube-system
	010ebc9ab887d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   38cf39d2254a6       coredns-66bc5c9577-4tklf                    kube-system
	bf6af03dc7508       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   86004d049827d       storage-provisioner                         kube-system
	cd4a11fe27652       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   a02f0c796fcb7       kindnet-b47q8                               kube-system
	423282b955e32       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             5 minutes ago            Running             kube-proxy                               0                   e9ba591950e5e       kube-proxy-lt4ld                            kube-system
	6f7abeab2dc46       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             5 minutes ago            Running             kube-controller-manager                  0                   e57f1308f317f       kube-controller-manager-addons-054300       kube-system
	7e676b17ce03a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             5 minutes ago            Running             etcd                                     0                   c1056718c8e52       etcd-addons-054300                          kube-system
	f47e61eab5569       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             5 minutes ago            Running             kube-apiserver                           0                   c0e60f07997fd       kube-apiserver-addons-054300                kube-system
	6f9a84d527a09       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             5 minutes ago            Running             kube-scheduler                           0                   b618244dd39f3       kube-scheduler-addons-054300                kube-system
	
	
	==> coredns [010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f] <==
	[INFO] 10.244.0.17:38351 - 45103 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002102081s
	[INFO] 10.244.0.17:38351 - 59217 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000238714s
	[INFO] 10.244.0.17:38351 - 15747 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000149663s
	[INFO] 10.244.0.17:40308 - 19172 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000165441s
	[INFO] 10.244.0.17:40308 - 18942 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000207198s
	[INFO] 10.244.0.17:46740 - 46626 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000141548s
	[INFO] 10.244.0.17:46740 - 47087 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00015891s
	[INFO] 10.244.0.17:53160 - 56143 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133138s
	[INFO] 10.244.0.17:53160 - 55963 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146553s
	[INFO] 10.244.0.17:35751 - 63766 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001679916s
	[INFO] 10.244.0.17:35751 - 63569 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001837308s
	[INFO] 10.244.0.17:35452 - 48149 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000142155s
	[INFO] 10.244.0.17:35452 - 47960 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090553s
	[INFO] 10.244.0.21:57352 - 21832 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181524s
	[INFO] 10.244.0.21:60969 - 44327 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000345619s
	[INFO] 10.244.0.21:47451 - 15684 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110433s
	[INFO] 10.244.0.21:39403 - 32649 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000243973s
	[INFO] 10.244.0.21:54234 - 8527 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000140719s
	[INFO] 10.244.0.21:39063 - 38368 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109293s
	[INFO] 10.244.0.21:59778 - 15964 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00246672s
	[INFO] 10.244.0.21:57912 - 14364 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002261418s
	[INFO] 10.244.0.21:53530 - 36713 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001420263s
	[INFO] 10.244.0.21:51043 - 32048 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003380804s
	[INFO] 10.244.0.23:35963 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144305s
	[INFO] 10.244.0.23:38505 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105987s
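The alternating NXDOMAIN/NOERROR pairs above are ordinary resolver search-path expansion rather than a fault: with the pod-default ndots:5, a name like registry.kube-system.svc.cluster.local is tried against each search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the absolute lookup answers NOERROR. The search list a pod in this profile actually received can be checked directly, for example via the busybox pod that appears in the node listing below:

	kubectl --context addons-054300 exec busybox -- cat /etc/resolv.conf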
	
	
	==> describe nodes <==
	Name:               addons-054300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-054300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=addons-054300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T07_25_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-054300
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-054300"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 07:25:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-054300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 07:30:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 07:30:44 +0000   Wed, 10 Dec 2025 07:25:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 07:30:44 +0000   Wed, 10 Dec 2025 07:25:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 07:30:44 +0000   Wed, 10 Dec 2025 07:25:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 07:30:44 +0000   Wed, 10 Dec 2025 07:26:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-054300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bfdf75342fda7ce4dcc05536938a4f8
	  System UUID:                9c598586-ae7f-4553-b778-30f36bc21e4b
	  Boot ID:                    9ae06026-ffc7-4eb4-912b-d54adcad0f66
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     cloud-spanner-emulator-5bdddb765-59bbv       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  default                     hello-world-app-5d498dc89-kn7ft              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-rhzvh                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  gcp-auth                    gcp-auth-78565c9fb4-ws495                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-htr8f    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m2s
	  kube-system                 coredns-66bc5c9577-4tklf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m8s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 csi-hostpathplugin-bmkhb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 etcd-addons-054300                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m13s
	  kube-system                 kindnet-b47q8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m8s
	  kube-system                 kube-apiserver-addons-054300                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-addons-054300        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-proxy-lt4ld                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-addons-054300                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 metrics-server-85b7d694d7-pcvgr              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m3s
	  kube-system                 nvidia-device-plugin-daemonset-jgw4d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 registry-6b586f9694-rgr2q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 registry-creds-764b6fb674-7pk58              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 registry-proxy-x77gq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 snapshot-controller-7d9fbc56b8-9c9b8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-p5w2h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  local-path-storage          local-path-provisioner-648f6765c9-vwdrf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-r7798               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m6s   kube-proxy       
	  Normal   Starting                 5m13s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m13s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m13s  kubelet          Node addons-054300 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m13s  kubelet          Node addons-054300 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m13s  kubelet          Node addons-054300 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m9s   node-controller  Node addons-054300 event: Registered Node addons-054300 in Controller
	  Normal   NodeReady                4m26s  kubelet          Node addons-054300 status is now: NodeReady
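The percentages in the Allocated resources table are requests and limits over the node's allocatable capacity: 1050m CPU requested out of 2000m allocatable is 52%, and 638Mi of the 8022300Ki allocatable memory comes to about 8%. The same dump can be regenerated against this profile with:

	kubectl --context addons-054300 describe node addons-054300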
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765] <==
	{"level":"warn","ts":"2025-12-10T07:25:32.866955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.880694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.897992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.923591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.961811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.989738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.029809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.056867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.089866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.119478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.136671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.181146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.206582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.229221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.267446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.313920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.340285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.365093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.524890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:49.607184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:49.619106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:26:11.501987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:26:11.512040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:26:11.540468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:26:11.555215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51118","server-name":"","error":"EOF"}
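These rejected connection warnings all carry error EOF and cluster around the 07:25:32, 07:25:49 and 07:26:11 marks, the pattern etcd logs when a client opens a TCP connection to the client port and closes it before completing the TLS handshake, as bootstrap-time health probes do. A minimal sketch that reproduces one such line from inside the node, assuming the default client port 2379, is:

	out/minikube-linux-arm64 -p addons-054300 ssh -- bash -c 'exec 3<>/dev/tcp/127.0.0.1/2379; exec 3<&-'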
	
	
	==> gcp-auth [f3fe35bc7c9db5bb24f28d54b1bbac54bd9481d9d194946197ad5c03255b6594] <==
	2025/12/10 07:27:18 GCP Auth Webhook started!
	2025/12/10 07:27:27 Ready to marshal response ...
	2025/12/10 07:27:27 Ready to write response ...
	2025/12/10 07:27:27 Ready to marshal response ...
	2025/12/10 07:27:27 Ready to write response ...
	2025/12/10 07:27:27 Ready to marshal response ...
	2025/12/10 07:27:27 Ready to write response ...
	2025/12/10 07:27:46 Ready to marshal response ...
	2025/12/10 07:27:46 Ready to write response ...
	2025/12/10 07:27:55 Ready to marshal response ...
	2025/12/10 07:27:55 Ready to write response ...
	2025/12/10 07:28:04 Ready to marshal response ...
	2025/12/10 07:28:04 Ready to write response ...
	2025/12/10 07:28:04 Ready to marshal response ...
	2025/12/10 07:28:04 Ready to write response ...
	2025/12/10 07:28:12 Ready to marshal response ...
	2025/12/10 07:28:12 Ready to write response ...
	2025/12/10 07:28:19 Ready to marshal response ...
	2025/12/10 07:28:19 Ready to write response ...
	2025/12/10 07:28:29 Ready to marshal response ...
	2025/12/10 07:28:29 Ready to write response ...
	2025/12/10 07:30:48 Ready to marshal response ...
	2025/12/10 07:30:48 Ready to write response ...
	
	
	==> kernel <==
	 07:30:50 up  2:13,  0 user,  load average: 0.43, 1.68, 1.74
	Linux addons-054300 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7] <==
	I1210 07:28:43.718340       1 main.go:301] handling current node
	I1210 07:28:53.724521       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:28:53.724555       1 main.go:301] handling current node
	I1210 07:29:03.725209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:29:03.725242       1 main.go:301] handling current node
	I1210 07:29:13.718058       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:29:13.718090       1 main.go:301] handling current node
	I1210 07:29:23.723254       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:29:23.723288       1 main.go:301] handling current node
	I1210 07:29:33.725226       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:29:33.725258       1 main.go:301] handling current node
	I1210 07:29:43.725984       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:29:43.726128       1 main.go:301] handling current node
	I1210 07:29:53.717372       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:29:53.717481       1 main.go:301] handling current node
	I1210 07:30:03.717174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:30:03.717293       1 main.go:301] handling current node
	I1210 07:30:13.725196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:30:13.725230       1 main.go:301] handling current node
	I1210 07:30:23.723110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:30:23.723146       1 main.go:301] handling current node
	I1210 07:30:33.720373       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:30:33.720404       1 main.go:301] handling current node
	I1210 07:30:43.719886       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:30:43.720002       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83] <==
	W1210 07:26:11.495930       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 07:26:11.512066       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 07:26:11.539829       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 07:26:11.555278       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 07:26:24.351174       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.154.198:443: connect: connection refused
	E1210 07:26:24.351291       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.154.198:443: connect: connection refused" logger="UnhandledError"
	W1210 07:26:24.351872       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.154.198:443: connect: connection refused
	E1210 07:26:24.351915       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.154.198:443: connect: connection refused" logger="UnhandledError"
	W1210 07:26:24.495647       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.154.198:443: connect: connection refused
	E1210 07:26:24.495778       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.154.198:443: connect: connection refused" logger="UnhandledError"
	E1210 07:26:37.807314       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.255.146:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.255.146:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.255.146:443: connect: connection refused" logger="UnhandledError"
	W1210 07:26:37.807864       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 07:26:37.809124       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 07:26:37.845069       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1210 07:26:37.861399       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 07:27:35.976399       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34096: use of closed network connection
	I1210 07:28:04.166379       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1210 07:28:06.314681       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1210 07:28:27.730034       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1210 07:28:29.012493       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1210 07:28:29.313983       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.122.176"}
	I1210 07:30:48.920987       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.85.80"}
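The Failed calling webhook, failing open lines in the 07:26:24 window show the gcp-auth mutating webhook being tolerated while its service endpoint (10.99.154.198:443) still refused connections; failing open is the behaviour of failurePolicy: Ignore. Assuming the webhook configuration object carries the same name as the webhook in the log, that policy can be confirmed with:

	kubectl --context addons-054300 get mutatingwebhookconfiguration gcp-auth-mutate.k8s.io -o jsonpath='{.webhooks[*].failurePolicy}'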
	
	
	==> kube-controller-manager [6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009] <==
	I1210 07:25:41.524975       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 07:25:41.525012       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 07:25:41.525017       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 07:25:41.525346       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 07:25:41.525397       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 07:25:41.525856       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 07:25:41.528463       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 07:25:41.528650       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 07:25:41.531097       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 07:25:41.531107       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 07:25:41.531122       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 07:25:41.532682       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 07:25:41.543116       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:25:41.543143       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 07:25:41.543150       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1210 07:25:47.776407       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 07:25:47.794157       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 07:26:11.488441       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 07:26:11.488641       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 07:26:11.488690       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 07:26:11.521455       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 07:26:11.527538       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 07:26:11.589579       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 07:26:11.628162       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:26:26.489695       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
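The stale GroupVersion discovery: metrics.k8s.io/v1beta1 errors are the resource-quota and garbage-collector controllers racing the metrics-server APIService during bring-up; both controllers report their caches synced at 07:26:11 once discovery settles. Whether the aggregated API is currently available can be checked with:

	kubectl --context addons-054300 get apiservice v1beta1.metrics.k8s.io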
	
	
	==> kube-proxy [423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365] <==
	I1210 07:25:43.667383       1 server_linux.go:53] "Using iptables proxy"
	I1210 07:25:43.760735       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 07:25:43.869049       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 07:25:43.869088       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 07:25:43.869169       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 07:25:43.903818       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 07:25:43.903872       1 server_linux.go:132] "Using iptables Proxier"
	I1210 07:25:43.917270       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 07:25:43.921159       1 server.go:527] "Version info" version="v1.34.2"
	I1210 07:25:43.921195       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:25:43.925351       1 config.go:200] "Starting service config controller"
	I1210 07:25:43.925366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 07:25:43.925382       1 config.go:106] "Starting endpoint slice config controller"
	I1210 07:25:43.925386       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 07:25:43.925397       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 07:25:43.925401       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 07:25:43.925986       1 config.go:309] "Starting node config controller"
	I1210 07:25:43.925993       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 07:25:43.926000       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 07:25:44.025893       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 07:25:44.025933       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 07:25:44.025978       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
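The only error in this block is the advisory configuration warning at startup: with nodePortAddresses unset, NodePort connections are accepted on every local IP. If that ever needed tightening, the log's own suggestion translates to the flag below (a sketch of the kube-proxy invocation, not something this harness sets):

	kube-proxy --nodeport-addresses=primary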
	
	
	==> kube-scheduler [6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f] <==
	E1210 07:25:34.614859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:25:34.618611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:25:34.623341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1210 07:25:34.623646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:25:34.623697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 07:25:34.623743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 07:25:34.623788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:25:34.623826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:25:34.623857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 07:25:34.623977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:25:34.624013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 07:25:34.624048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 07:25:35.424263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:25:35.434462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:25:35.455846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 07:25:35.458298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:25:35.475080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:25:35.508974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:25:35.521913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:25:35.645424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 07:25:35.656772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:25:35.758413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:25:35.823373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:25:36.140962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 07:25:37.903077       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
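All of the Failed to watch errors above fall between 07:25:34 and 07:25:36 and stop once the client-ca cache syncs at 07:25:37; this is the usual bootstrap race in which the scheduler's informers start listing before the RBAC policy for system:kube-scheduler has been published. That the permissions did land can be spot-checked afterwards, for example:

	kubectl --context addons-054300 auth can-i list nodes --as=system:kube-scheduler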
	
	
	==> kubelet <==
	Dec 10 07:29:11 addons-054300 kubelet[1267]: I1210 07:29:11.362085    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-rgr2q" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:29:30 addons-054300 kubelet[1267]: I1210 07:29:30.361880    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-x77gq" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:29:34 addons-054300 kubelet[1267]: I1210 07:29:34.362259    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jgw4d" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:30:34 addons-054300 kubelet[1267]: I1210 07:30:34.761651    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7pk58" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:30:36 addons-054300 kubelet[1267]: I1210 07:30:36.859888    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7pk58" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:30:36 addons-054300 kubelet[1267]: I1210 07:30:36.860392    1267 scope.go:117] "RemoveContainer" containerID="36dd116c16d3111552e66f2c1cb29f20b795ccdce11415384aed8b004b63a9c2"
	Dec 10 07:30:37 addons-054300 kubelet[1267]: I1210 07:30:37.680366    1267 scope.go:117] "RemoveContainer" containerID="36dd116c16d3111552e66f2c1cb29f20b795ccdce11415384aed8b004b63a9c2"
	Dec 10 07:30:37 addons-054300 kubelet[1267]: I1210 07:30:37.865876    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7pk58" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:30:37 addons-054300 kubelet[1267]: I1210 07:30:37.865929    1267 scope.go:117] "RemoveContainer" containerID="1648f8705ee52192bd72ce1a3677231db1e57dac9d1ea5d6413456bb1fc21380"
	Dec 10 07:30:37 addons-054300 kubelet[1267]: E1210 07:30:37.866073    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7pk58_kube-system(818632b4-4e74-4c94-82c0-6e672524abcb)\"" pod="kube-system/registry-creds-764b6fb674-7pk58" podUID="818632b4-4e74-4c94-82c0-6e672524abcb"
	Dec 10 07:30:38 addons-054300 kubelet[1267]: I1210 07:30:38.869474    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7pk58" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:30:38 addons-054300 kubelet[1267]: I1210 07:30:38.869958    1267 scope.go:117] "RemoveContainer" containerID="1648f8705ee52192bd72ce1a3677231db1e57dac9d1ea5d6413456bb1fc21380"
	Dec 10 07:30:38 addons-054300 kubelet[1267]: E1210 07:30:38.870505    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7pk58_kube-system(818632b4-4e74-4c94-82c0-6e672524abcb)\"" pod="kube-system/registry-creds-764b6fb674-7pk58" podUID="818632b4-4e74-4c94-82c0-6e672524abcb"
	Dec 10 07:30:40 addons-054300 kubelet[1267]: I1210 07:30:40.362324    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-x77gq" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:30:41 addons-054300 kubelet[1267]: I1210 07:30:41.362168    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-rgr2q" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:30:48 addons-054300 kubelet[1267]: I1210 07:30:48.811316    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/376e4993-be61-46be-8003-487823e60021-gcp-creds\") pod \"hello-world-app-5d498dc89-kn7ft\" (UID: \"376e4993-be61-46be-8003-487823e60021\") " pod="default/hello-world-app-5d498dc89-kn7ft"
	Dec 10 07:30:48 addons-054300 kubelet[1267]: I1210 07:30:48.811918    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9ts\" (UniqueName: \"kubernetes.io/projected/376e4993-be61-46be-8003-487823e60021-kube-api-access-pm9ts\") pod \"hello-world-app-5d498dc89-kn7ft\" (UID: \"376e4993-be61-46be-8003-487823e60021\") " pod="default/hello-world-app-5d498dc89-kn7ft"
	Dec 10 07:30:49 addons-054300 kubelet[1267]: W1210 07:30:49.100162    1267 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a/crio-c834f745d24a868f98d441ee5dd181a7611600ea5ead89e151a47aff62852cc9 WatchSource:0}: Error finding container c834f745d24a868f98d441ee5dd181a7611600ea5ead89e151a47aff62852cc9: Status 404 returned error can't find the container with id c834f745d24a868f98d441ee5dd181a7611600ea5ead89e151a47aff62852cc9
	Dec 10 07:30:49 addons-054300 kubelet[1267]: I1210 07:30:49.923914    1267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-kn7ft" podStartSLOduration=1.305217241 podStartE2EDuration="1.923884615s" podCreationTimestamp="2025-12-10 07:30:48 +0000 UTC" firstStartedPulling="2025-12-10 07:30:49.105229654 +0000 UTC m=+311.861656138" lastFinishedPulling="2025-12-10 07:30:49.723897028 +0000 UTC m=+312.480323512" observedRunningTime="2025-12-10 07:30:49.922443077 +0000 UTC m=+312.678869569" watchObservedRunningTime="2025-12-10 07:30:49.923884615 +0000 UTC m=+312.680311107"
	Dec 10 07:30:50 addons-054300 kubelet[1267]: I1210 07:30:50.361951    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7pk58" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:30:50 addons-054300 kubelet[1267]: I1210 07:30:50.362022    1267 scope.go:117] "RemoveContainer" containerID="1648f8705ee52192bd72ce1a3677231db1e57dac9d1ea5d6413456bb1fc21380"
	Dec 10 07:30:50 addons-054300 kubelet[1267]: I1210 07:30:50.936029    1267 scope.go:117] "RemoveContainer" containerID="1648f8705ee52192bd72ce1a3677231db1e57dac9d1ea5d6413456bb1fc21380"
	Dec 10 07:30:50 addons-054300 kubelet[1267]: I1210 07:30:50.937498    1267 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-7pk58" secret="" err="secret \"gcp-auth\" not found"
	Dec 10 07:30:50 addons-054300 kubelet[1267]: I1210 07:30:50.937554    1267 scope.go:117] "RemoveContainer" containerID="85114969bcff21243ab3358a73a3906a47130df82e3ebc585bcc4d4188c56cf1"
	Dec 10 07:30:50 addons-054300 kubelet[1267]: E1210 07:30:50.937760    1267 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-7pk58_kube-system(818632b4-4e74-4c94-82c0-6e672524abcb)\"" pod="kube-system/registry-creds-764b6fb674-7pk58" podUID="818632b4-4e74-4c94-82c0-6e672524abcb"
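The kubelet entries show the registry-creds container cycling through CrashLoopBackOff, with the back-off growing from 10s to 20s across these messages. The exit reason of the most recently crashed instance can be pulled from the pod named in the log:

	kubectl --context addons-054300 -n kube-system logs pod/registry-creds-764b6fb674-7pk58 --previous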
	
	
	==> storage-provisioner [bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c] <==
	W1210 07:30:26.686712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:28.690161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:28.696749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:30.699876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:30.705389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:32.708389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:32.712620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:34.715532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:34.722497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:36.725743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:36.730348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:38.733539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:38.738294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:40.741939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:40.746813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:42.750372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:42.755649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:44.758461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:44.762602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:46.765638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:46.770076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:48.790810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:48.851332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:50.854871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:30:50.860405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
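Every line in this block is the same deprecation warning, emitted in pairs roughly every two seconds, consistent with the provisioner renewing an Endpoints-based leader-election lock. The successor resource the warning points at can be listed with:

	kubectl --context addons-054300 -n kube-system get endpointslices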
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-054300 -n addons-054300
helpers_test.go:270: (dbg) Run:  kubectl --context addons-054300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-5dvp6 ingress-nginx-admission-patch-tlj69
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-054300 describe pod ingress-nginx-admission-create-5dvp6 ingress-nginx-admission-patch-tlj69
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-054300 describe pod ingress-nginx-admission-create-5dvp6 ingress-nginx-admission-patch-tlj69: exit status 1 (77.814917ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5dvp6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tlj69" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-054300 describe pod ingress-nginx-admission-create-5dvp6 ingress-nginx-admission-patch-tlj69: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (292.383692ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 07:30:52.046181  389053 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:30:52.047000  389053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:30:52.047073  389053 out.go:374] Setting ErrFile to fd 2...
	I1210 07:30:52.047096  389053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:30:52.047967  389053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:30:52.048414  389053 mustload.go:66] Loading cluster: addons-054300
	I1210 07:30:52.048903  389053 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:30:52.048927  389053 addons.go:622] checking whether the cluster is paused
	I1210 07:30:52.049077  389053 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:30:52.049097  389053 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:30:52.049761  389053 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:30:52.074023  389053 ssh_runner.go:195] Run: systemctl --version
	I1210 07:30:52.074085  389053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:30:52.096761  389053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:30:52.205746  389053 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:30:52.205853  389053 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:30:52.237296  389053 cri.go:89] found id: "85114969bcff21243ab3358a73a3906a47130df82e3ebc585bcc4d4188c56cf1"
	I1210 07:30:52.237368  389053 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:30:52.237386  389053 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:30:52.237406  389053 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:30:52.237425  389053 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:30:52.237461  389053 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:30:52.237477  389053 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:30:52.237494  389053 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:30:52.237528  389053 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:30:52.237552  389053 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:30:52.237571  389053 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:30:52.237588  389053 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:30:52.237617  389053 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:30:52.237638  389053 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:30:52.237656  389053 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:30:52.237681  389053 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:30:52.237729  389053 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:30:52.237748  389053 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:30:52.237780  389053 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:30:52.237801  389053 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:30:52.237824  389053 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:30:52.237841  389053 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:30:52.237870  389053 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:30:52.237891  389053 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:30:52.237909  389053 cri.go:89] found id: ""
	I1210 07:30:52.237992  389053 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:30:52.266957  389053 out.go:203] 
	W1210 07:30:52.270042  389053 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:30:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:30:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:30:52.270065  389053 out.go:285] * 
	* 
	W1210 07:30:52.275696  389053 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:30:52.278672  389053 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable ingress --alsologtostderr -v=1: exit status 11 (276.822919ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 07:30:52.337578  389167 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:30:52.338685  389167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:30:52.338740  389167 out.go:374] Setting ErrFile to fd 2...
	I1210 07:30:52.338765  389167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:30:52.339192  389167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:30:52.339634  389167 mustload.go:66] Loading cluster: addons-054300
	I1210 07:30:52.340228  389167 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:30:52.340274  389167 addons.go:622] checking whether the cluster is paused
	I1210 07:30:52.340432  389167 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:30:52.340464  389167 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:30:52.341020  389167 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:30:52.359256  389167 ssh_runner.go:195] Run: systemctl --version
	I1210 07:30:52.359312  389167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:30:52.379548  389167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:30:52.497648  389167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:30:52.497761  389167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:30:52.528505  389167 cri.go:89] found id: "85114969bcff21243ab3358a73a3906a47130df82e3ebc585bcc4d4188c56cf1"
	I1210 07:30:52.528535  389167 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:30:52.528540  389167 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:30:52.528544  389167 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:30:52.528548  389167 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:30:52.528551  389167 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:30:52.528555  389167 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:30:52.528558  389167 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:30:52.528561  389167 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:30:52.528568  389167 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:30:52.528571  389167 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:30:52.528574  389167 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:30:52.528577  389167 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:30:52.528580  389167 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:30:52.528584  389167 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:30:52.528593  389167 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:30:52.528597  389167 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:30:52.528602  389167 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:30:52.528605  389167 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:30:52.528608  389167 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:30:52.528613  389167 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:30:52.528617  389167 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:30:52.528620  389167 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:30:52.528627  389167 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:30:52.528630  389167 cri.go:89] found id: ""
	I1210 07:30:52.528684  389167 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:30:52.543886  389167 out.go:203] 
	W1210 07:30:52.546761  389167 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:30:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:30:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:30:52.546797  389167 out.go:285] * 
	* 
	W1210 07:30:52.552530  389167 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:30:52.555280  389167 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.86s)
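
Note on the failure mode shared by the disable errors above: each `addons disable` invocation exits with status 11 before touching the addon, because the paused check runs `sudo runc list -f json` inside the node and that command fails with `open /run/runc: no such file or directory`; on this crio node no runc state directory exists at that path. A minimal Go sketch of the failing probe follows, run via local exec rather than minikube's SSH runner; probePaused is a hypothetical stand-in for illustration, not minikube's actual helper.

package main

import (
	"fmt"
	"os/exec"
)

// probePaused reproduces the failing step from the logs above:
// `sudo runc list -f json`. On a node without /run/runc, runc exits 1
// with "open /run/runc: no such file or directory".
func probePaused() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	out, err := probePaused()
	if err != nil {
		// Mirrors the MK_ADDON_DISABLE_PAUSED wrapping seen in stderr.
		fmt.Printf("check paused: list paused: runc: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}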

TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-rhzvh" [09687d98-7393-428b-a778-2dd61fdc3f0b] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003440485s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (291.597823ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 07:28:27.629288  387360 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:27.630197  387360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:27.630239  387360 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:27.630260  387360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:27.630547  387360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:27.631671  387360 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:27.632103  387360 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:27.632196  387360 addons.go:622] checking whether the cluster is paused
	I1210 07:28:27.632352  387360 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:27.632389  387360 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:27.632924  387360 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:27.652073  387360 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:27.652128  387360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:27.671342  387360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:27.774334  387360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:27.774422  387360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:27.812752  387360 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:27.812771  387360 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:27.812776  387360 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:27.812780  387360 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:27.812783  387360 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:27.812787  387360 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:27.812790  387360 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:27.812793  387360 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:27.812796  387360 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:27.812801  387360 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:27.812807  387360 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:27.812813  387360 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:27.812817  387360 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:27.812820  387360 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:27.812825  387360 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:27.812831  387360 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:27.812834  387360 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:27.812837  387360 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:27.812840  387360 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:27.812843  387360 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:27.812847  387360 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:27.812850  387360 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:27.812853  387360 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:27.812855  387360 cri.go:89] found id: ""
	I1210 07:28:27.812900  387360 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:27.836192  387360 out.go:203] 
	W1210 07:28:27.839195  387360 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:27.839218  387360 out.go:285] * 
	* 
	W1210 07:28:27.851996  387360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:27.856123  387360 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.30s)
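
The gadget pod itself reported healthy within about 6 seconds; only the trailing disable step failed, on the same runc probe. One possible mitigation, shown below purely as a sketch and not present in the code under test, is to pick the OCI runtime whose state directory actually exists before listing containers, since crio installations commonly run crun with state under /run/crun. Both candidate paths, and crun's runc-compatible list flags, are assumptions.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// listRuntimeContainers chooses runc or crun based on which state
// directory exists. The candidates are assumed defaults for common
// crio setups, not values taken from minikube's source.
func listRuntimeContainers() ([]byte, error) {
	candidates := []struct{ bin, root string }{
		{"runc", "/run/runc"},
		{"crun", "/run/crun"}, // assumes crun accepts runc-style list flags
	}
	for _, c := range candidates {
		if _, err := os.Stat(c.root); err == nil {
			// --root is a global flag, so it precedes the subcommand.
			return exec.Command("sudo", c.bin, "--root", c.root, "list", "-f", "json").CombinedOutput()
		}
	}
	return nil, fmt.Errorf("no known OCI runtime state directory found")
}

func main() {
	out, err := listRuntimeContainers()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Printf("%s", out)
}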

TestAddons/parallel/MetricsServer (6.36s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 10.262406ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003003967s
addons_test.go:465: (dbg) Run:  kubectl --context addons-054300 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (259.070595ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 07:28:34.018375  387741 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:34.019725  387741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:34.019782  387741 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:34.019803  387741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:34.020147  387741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:34.020545  387741 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:34.021035  387741 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:34.021085  387741 addons.go:622] checking whether the cluster is paused
	I1210 07:28:34.021231  387741 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:34.021269  387741 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:34.021863  387741 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:34.049147  387741 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:34.049223  387741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:34.069465  387741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:34.165456  387741 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:34.165589  387741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:34.195091  387741 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:34.195116  387741 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:34.195120  387741 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:34.195124  387741 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:34.195137  387741 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:34.195142  387741 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:34.195145  387741 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:34.195148  387741 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:34.195152  387741 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:34.195158  387741 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:34.195161  387741 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:34.195165  387741 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:34.195168  387741 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:34.195176  387741 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:34.195179  387741 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:34.195184  387741 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:34.195189  387741 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:34.195193  387741 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:34.195196  387741 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:34.195198  387741 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:34.195203  387741 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:34.195206  387741 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:34.195209  387741 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:34.195212  387741 cri.go:89] found id: ""
	I1210 07:28:34.195268  387741 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:34.209933  387741 out.go:203] 
	W1210 07:28:34.212781  387741 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:34.212800  387741 out.go:285] * 
	* 
	W1210 07:28:34.218397  387741 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:34.221276  387741 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.36s)
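
For anyone scripting around this suite, the failure is detectable purely from the exit code. The sketch below re-runs the exact command from this block and checks for status 11; treating 11 as the paused-check failure class is inferred from the logs in this report, not from a documented exit-code table.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path, profile, and flags copied from the failing invocation above.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "addons-054300",
		"addons", "disable", "metrics-server", "--alsologtostderr", "-v=1")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 11 {
		// The status observed alongside MK_ADDON_DISABLE_PAUSED here.
		fmt.Println("disable failed at the paused check (exit status 11)")
	} else if err != nil {
		fmt.Println("disable failed for another reason:", err)
	}
}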

TestAddons/parallel/CSI (52.08s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1210 07:27:36.633713  378528 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1210 07:27:36.640326  378528 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1210 07:27:36.640358  378528 kapi.go:107] duration metric: took 9.864092ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 9.883884ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-054300 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-054300 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [850c299c-48ba-4ce8-88d8-ced25ab230e1] Pending
helpers_test.go:353: "task-pv-pod" [850c299c-48ba-4ce8-88d8-ced25ab230e1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [850c299c-48ba-4ce8-88d8-ced25ab230e1] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004699009s
addons_test.go:574: (dbg) Run:  kubectl --context addons-054300 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-054300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-054300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-054300 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-054300 delete pod task-pv-pod: (1.040114598s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-054300 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-054300 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-054300 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8] Pending
helpers_test.go:353: "task-pv-pod-restore" [7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003925286s
addons_test.go:616: (dbg) Run:  kubectl --context addons-054300 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-054300 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-054300 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (288.057247ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 07:28:28.223628  387425 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:28.224421  387425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:28.224457  387425 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:28.224478  387425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:28.224760  387425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:28.225076  387425 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:28.225478  387425 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:28.225519  387425 addons.go:622] checking whether the cluster is paused
	I1210 07:28:28.225697  387425 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:28.225734  387425 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:28.226250  387425 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:28.246964  387425 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:28.247136  387425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:28.265611  387425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:28.365572  387425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:28.365692  387425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:28.412727  387425 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:28.412753  387425 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:28.412759  387425 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:28.412774  387425 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:28.412778  387425 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:28.412782  387425 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:28.412786  387425 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:28.412789  387425 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:28.412792  387425 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:28.412804  387425 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:28.412810  387425 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:28.412814  387425 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:28.412817  387425 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:28.412820  387425 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:28.412824  387425 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:28.412835  387425 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:28.412839  387425 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:28.412844  387425 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:28.412847  387425 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:28.412850  387425 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:28.412856  387425 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:28.412861  387425 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:28.412864  387425 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:28.412868  387425 cri.go:89] found id: ""
	I1210 07:28:28.412922  387425 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:28.428563  387425 out.go:203] 
	W1210 07:28:28.431409  387425 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:28.431439  387425 out.go:285] * 
	* 
	W1210 07:28:28.437097  387425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:28.440178  387425 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (258.243847ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1210 07:28:28.503146  387467 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:28.503944  387467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:28.503988  387467 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:28.504011  387467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:28.504338  387467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:28.504690  387467 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:28.505136  387467 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:28.505176  387467 addons.go:622] checking whether the cluster is paused
	I1210 07:28:28.505331  387467 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:28.505362  387467 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:28.505940  387467 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:28.523849  387467 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:28.523908  387467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:28.543114  387467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:28.641586  387467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:28.641665  387467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:28.670531  387467 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:28.670554  387467 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:28.670559  387467 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:28.670569  387467 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:28.670573  387467 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:28.670576  387467 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:28.670580  387467 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:28.670584  387467 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:28.670587  387467 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:28.670594  387467 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:28.670597  387467 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:28.670601  387467 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:28.670604  387467 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:28.670608  387467 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:28.670611  387467 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:28.670616  387467 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:28.670624  387467 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:28.670628  387467 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:28.670631  387467 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:28.670634  387467 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:28.670639  387467 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:28.670642  387467 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:28.670646  387467 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:28.670649  387467 cri.go:89] found id: ""
	I1210 07:28:28.670699  387467 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:28.685886  387467 out.go:203] 
	W1210 07:28:28.688696  387467 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:28.688720  387467 out.go:285] * 
	* 
	W1210 07:28:28.694318  387467 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:28.697172  387467 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (52.08s)
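
The storage flow in this block succeeded end to end (claim bound, snapshot taken, restore pod ran); only the two disable calls at the end failed, again on the runc probe. The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` lines are the suite polling the claim until it binds; a rough reconstruction of that helper follows, where waitPVCPhase is hypothetical and the context, claim name, and namespace come from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCPhase polls `kubectl get pvc` the way helpers_test.go:403
// appears to, until the claim reports the wanted phase or the
// timeout elapses.
func waitPVCPhase(kubectx, name, ns, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, name, want, timeout)
}

func main() {
	// The suite waits up to 6m0s for claim "hpvc" to become Bound.
	if err := waitPVCPhase("addons-054300", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}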

TestAddons/parallel/Headlamp (3.38s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-054300 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-054300 --alsologtostderr -v=1: exit status 11 (273.277462ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:28:18.242842  386744 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:18.243515  386744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:18.243532  386744 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:18.243539  386744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:18.243948  386744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:18.244894  386744 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:18.245351  386744 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:18.245401  386744 addons.go:622] checking whether the cluster is paused
	I1210 07:28:18.245548  386744 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:18.245586  386744 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:18.246138  386744 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:18.265064  386744 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:18.265117  386744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:18.282801  386744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:18.396813  386744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:18.396902  386744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:18.426952  386744 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:18.426976  386744 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:18.426996  386744 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:18.427000  386744 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:18.427044  386744 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:18.427054  386744 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:18.427057  386744 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:18.427060  386744 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:18.427085  386744 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:18.427092  386744 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:18.427098  386744 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:18.427102  386744 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:18.427107  386744 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:18.427110  386744 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:18.427115  386744 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:18.427123  386744 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:18.427130  386744 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:18.427135  386744 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:18.427138  386744 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:18.427161  386744 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:18.427168  386744 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:18.427175  386744 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:18.427178  386744 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:18.427181  386744 cri.go:89] found id: ""
	I1210 07:28:18.427252  386744 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:18.443330  386744 out.go:203] 
	W1210 07:28:18.446297  386744 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:18.446320  386744 out.go:285] * 
	W1210 07:28:18.451966  386744 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:18.454924  386744 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-054300 --alsologtostderr -v=1": exit status 11
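Note: the stderr above shows the paused check is two-staged. The crictl half succeeds (the cri.go:89 lines list 23 kube-system container IDs) and only the follow-up `sudo runc list -f json` fails. A sketch of that first, succeeding stage, assuming it is run inside the node (e.g. after `minikube ssh -p addons-054300`) where crictl is on the PATH and sudo is passwordless:

    // crictl-list.go - list kube-system containers the way cri.go does
    // before the probe falls through to runc (sketch; in-node execution
    // and passwordless sudo are assumptions).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        ids := strings.Fields(string(out)) // one ID per line under --quiet
        fmt.Printf("found %d kube-system containers\n", len(ids)) // 23 in the run above
    }

That crictl succeeds while runc does not is consistent with crio managing its own OCI runtime state directory rather than /run/runc.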
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-054300
helpers_test.go:244: (dbg) docker inspect addons-054300:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a",
	        "Created": "2025-12-10T07:25:12.430115897Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 379935,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:25:12.487333909Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a/hosts",
	        "LogPath": "/var/lib/docker/containers/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a/dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a-json.log",
	        "Name": "/addons-054300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-054300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-054300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc22f5170a29040c792fbce50e00f81449bfae5eee3653f8df540be03d7de55a",
	                "LowerDir": "/var/lib/docker/overlay2/518f1a22107b7ce6dc31dc1d0178c0e8732e9c2b2ac4312bf7116070f3c344af-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/518f1a22107b7ce6dc31dc1d0178c0e8732e9c2b2ac4312bf7116070f3c344af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/518f1a22107b7ce6dc31dc1d0178c0e8732e9c2b2ac4312bf7116070f3c344af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/518f1a22107b7ce6dc31dc1d0178c0e8732e9c2b2ac4312bf7116070f3c344af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-054300",
	                "Source": "/var/lib/docker/volumes/addons-054300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-054300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-054300",
	                "name.minikube.sigs.k8s.io": "addons-054300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e20225553d498f5b200facbcb4b592c80a6f5c28a0de5dc3fadf37ea92e8446",
	            "SandboxKey": "/var/run/docker/netns/8e20225553d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-054300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:76:54:fa:4e:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9407389f9dc9cc594903ffd05e318da54dbe021a447cc9aa16ccc948c918da56",
	                    "EndpointID": "a68d08196d76b97ddf9123fb278d9e51a1da6e0e19141e1e6a97994a8f3201d6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-054300",
	                        "dc22f5170a29"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
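Note: the inspect output above confirms the port mapping the earlier sshutil.go line relied on: 22/tcp inside the node is published on 127.0.0.1:33143. A sketch of recovering that endpoint by decoding the `docker inspect` JSON, equivalent to the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` the log shows minikube running (the struct models only the fields needed; the container name is taken from this run):

    // sshport.go - extract the forwarded SSH endpoint from docker inspect
    // (sketch; container name "addons-054300" comes from this report).
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type container struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIP   string `json:"HostIp"`
                HostPort string `json:"HostPort"`
            }
        }
    }

    func main() {
        out, err := exec.Command("docker", "inspect", "addons-054300").Output()
        if err != nil {
            panic(err)
        }
        var cs []container
        if err := json.Unmarshal(out, &cs); err != nil {
            panic(err)
        }
        ssh := cs[0].NetworkSettings.Ports["22/tcp"][0]
        fmt.Printf("ssh endpoint: %s:%s\n", ssh.HostIP, ssh.HostPort) // 127.0.0.1:33143 above
    }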
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-054300 -n addons-054300
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-054300 logs -n 25: (1.681967878s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ delete  │ -p download-only-417315                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-417315   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ start   │ -o=json --download-only -p download-only-121667 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-121667   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ delete  │ -p download-only-121667                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-121667   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ delete  │ -p download-only-941654                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-941654   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ delete  │ -p download-only-417315                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-417315   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ delete  │ -p download-only-121667                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-121667   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ start   │ --download-only -p download-docker-393659 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-393659 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ delete  │ -p download-docker-393659                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-393659 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ start   │ --download-only -p binary-mirror-728513 --alsologtostderr --binary-mirror http://127.0.0.1:37169 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-728513   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ delete  │ -p binary-mirror-728513                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-728513   │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ addons  │ enable dashboard -p addons-054300                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ addons  │ disable dashboard -p addons-054300                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ start   │ -p addons-054300 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:27 UTC │
	│ addons  │ addons-054300 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │                     │
	│ addons  │ addons-054300 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │                     │
	│ ip      │ addons-054300 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │ 10 Dec 25 07:27 UTC │
	│ addons  │ addons-054300 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │                     │
	│ addons  │ addons-054300 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:27 UTC │                     │
	│ addons  │ addons-054300 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ ssh     │ addons-054300 ssh cat /opt/local-path-provisioner/pvc-f752037c-3c31-451d-be8f-825295773e36_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │ 10 Dec 25 07:28 UTC │
	│ addons  │ addons-054300 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ addons-054300 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	│ addons  │ enable headlamp -p addons-054300 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-054300          │ jenkins │ v1.37.0 │ 10 Dec 25 07:28 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:24:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:24:47.218606  379535 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:24:47.218723  379535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:47.218739  379535 out.go:374] Setting ErrFile to fd 2...
	I1210 07:24:47.218746  379535 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:47.219143  379535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:24:47.219687  379535 out.go:368] Setting JSON to false
	I1210 07:24:47.220529  379535 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7638,"bootTime":1765343850,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:24:47.220626  379535 start.go:143] virtualization:  
	I1210 07:24:47.224498  379535 out.go:179] * [addons-054300] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:24:47.228394  379535 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:24:47.228480  379535 notify.go:221] Checking for updates...
	I1210 07:24:47.234361  379535 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:24:47.237347  379535 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:24:47.240337  379535 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:24:47.243259  379535 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:24:47.246125  379535 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:24:47.249174  379535 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:24:47.279977  379535 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:24:47.280093  379535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:47.340471  379535 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 07:24:47.331358298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:47.340579  379535 docker.go:319] overlay module found
	I1210 07:24:47.343820  379535 out.go:179] * Using the docker driver based on user configuration
	I1210 07:24:47.346682  379535 start.go:309] selected driver: docker
	I1210 07:24:47.346700  379535 start.go:927] validating driver "docker" against <nil>
	I1210 07:24:47.346713  379535 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:24:47.347458  379535 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:47.404159  379535 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 07:24:47.394471336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:47.404313  379535 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:24:47.404541  379535 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:24:47.407436  379535 out.go:179] * Using Docker driver with root privileges
	I1210 07:24:47.410278  379535 cni.go:84] Creating CNI manager for ""
	I1210 07:24:47.410353  379535 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:24:47.410363  379535 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:24:47.410446  379535 start.go:353] cluster config:
	{Name:addons-054300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:24:47.413643  379535 out.go:179] * Starting "addons-054300" primary control-plane node in "addons-054300" cluster
	I1210 07:24:47.416474  379535 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:24:47.419420  379535 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:24:47.422238  379535 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 07:24:47.422293  379535 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1210 07:24:47.422315  379535 cache.go:65] Caching tarball of preloaded images
	I1210 07:24:47.422414  379535 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:24:47.422424  379535 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 07:24:47.422765  379535 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/config.json ...
	I1210 07:24:47.422786  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/config.json: {Name:mk379c617c8daf139aef95276096a9d1c3831632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:24:47.422946  379535 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:24:47.438786  379535 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 07:24:47.438932  379535 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory
	I1210 07:24:47.438951  379535 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory, skipping pull
	I1210 07:24:47.438956  379535 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in cache, skipping pull
	I1210 07:24:47.438963  379535 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca as a tarball
	I1210 07:24:47.438967  379535 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca from local cache
	I1210 07:25:05.474702  379535 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca from cached tarball
	I1210 07:25:05.474742  379535 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:25:05.474781  379535 start.go:360] acquireMachinesLock for addons-054300: {Name:mk5475be5e895678590cbabe8e033afffb7fa95a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:25:05.474898  379535 start.go:364] duration metric: took 93.097µs to acquireMachinesLock for "addons-054300"
	I1210 07:25:05.474929  379535 start.go:93] Provisioning new machine with config: &{Name:addons-054300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:25:05.475034  379535 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:25:05.476547  379535 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1210 07:25:05.476783  379535 start.go:159] libmachine.API.Create for "addons-054300" (driver="docker")
	I1210 07:25:05.476820  379535 client.go:173] LocalClient.Create starting
	I1210 07:25:05.476934  379535 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem
	I1210 07:25:05.816166  379535 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem
	I1210 07:25:06.072129  379535 cli_runner.go:164] Run: docker network inspect addons-054300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:25:06.087708  379535 cli_runner.go:211] docker network inspect addons-054300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:25:06.087815  379535 network_create.go:284] running [docker network inspect addons-054300] to gather additional debugging logs...
	I1210 07:25:06.087838  379535 cli_runner.go:164] Run: docker network inspect addons-054300
	W1210 07:25:06.104201  379535 cli_runner.go:211] docker network inspect addons-054300 returned with exit code 1
	I1210 07:25:06.104241  379535 network_create.go:287] error running [docker network inspect addons-054300]: docker network inspect addons-054300: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-054300 not found
	I1210 07:25:06.104256  379535 network_create.go:289] output of [docker network inspect addons-054300]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-054300 not found
	
	** /stderr **
	I1210 07:25:06.104394  379535 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:25:06.121312  379535 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001941c80}
	I1210 07:25:06.121359  379535 network_create.go:124] attempt to create docker network addons-054300 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 07:25:06.121416  379535 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-054300 addons-054300
	I1210 07:25:06.178427  379535 network_create.go:108] docker network addons-054300 192.168.49.0/24 created
	I1210 07:25:06.178462  379535 kic.go:121] calculated static IP "192.168.49.2" for the "addons-054300" container
	I1210 07:25:06.178536  379535 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:25:06.193801  379535 cli_runner.go:164] Run: docker volume create addons-054300 --label name.minikube.sigs.k8s.io=addons-054300 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:25:06.209746  379535 oci.go:103] Successfully created a docker volume addons-054300
	I1210 07:25:06.209846  379535 cli_runner.go:164] Run: docker run --rm --name addons-054300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054300 --entrypoint /usr/bin/test -v addons-054300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:25:08.361395  379535 cli_runner.go:217] Completed: docker run --rm --name addons-054300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054300 --entrypoint /usr/bin/test -v addons-054300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib: (2.151494222s)
	I1210 07:25:08.361430  379535 oci.go:107] Successfully prepared a docker volume addons-054300
	I1210 07:25:08.361476  379535 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 07:25:08.361489  379535 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:25:08.361561  379535 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-054300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:25:12.356494  379535 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-054300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.994893742s)
	I1210 07:25:12.356527  379535 kic.go:203] duration metric: took 3.995033912s to extract preloaded images to volume ...
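The extraction above assumes the preload tarball was already downloaded into the cache; a quick existence check, with the path taken verbatim from the docker run command above, would be:

    ls -lh /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4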
	W1210 07:25:12.356667  379535 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:25:12.356785  379535 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:25:12.415334  379535 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-054300 --name addons-054300 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-054300 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-054300 --network addons-054300 --ip 192.168.49.2 --volume addons-054300:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:25:12.685650  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Running}}
	I1210 07:25:12.707897  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:12.730739  379535 cli_runner.go:164] Run: docker exec addons-054300 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:25:12.782213  379535 oci.go:144] the created container "addons-054300" has a running status.
	I1210 07:25:12.782240  379535 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa...
	I1210 07:25:13.353580  379535 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:25:13.373998  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:13.390770  379535 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:25:13.390796  379535 kic_runner.go:114] Args: [docker exec --privileged addons-054300 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:25:13.433178  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:13.450298  379535 machine.go:94] provisionDockerMachine start ...
	I1210 07:25:13.450417  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:13.467961  379535 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:13.468294  379535 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1210 07:25:13.468311  379535 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:25:13.468925  379535 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35952->127.0.0.1:33143: read: connection reset by peer
	I1210 07:25:16.602423  379535 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-054300
	
	I1210 07:25:16.602447  379535 ubuntu.go:182] provisioning hostname "addons-054300"
	I1210 07:25:16.602512  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:16.619323  379535 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:16.619635  379535 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1210 07:25:16.619650  379535 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-054300 && echo "addons-054300" | sudo tee /etc/hostname
	I1210 07:25:16.761901  379535 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-054300
	
	I1210 07:25:16.761998  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:16.780775  379535 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:16.781102  379535 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1210 07:25:16.781117  379535 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-054300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-054300/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-054300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:25:16.919095  379535 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:25:16.919125  379535 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:25:16.919156  379535 ubuntu.go:190] setting up certificates
	I1210 07:25:16.919173  379535 provision.go:84] configureAuth start
	I1210 07:25:16.919236  379535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054300
	I1210 07:25:16.935792  379535 provision.go:143] copyHostCerts
	I1210 07:25:16.935880  379535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:25:16.936000  379535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:25:16.936060  379535 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:25:16.936104  379535 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.addons-054300 san=[127.0.0.1 192.168.49.2 addons-054300 localhost minikube]
	I1210 07:25:17.210221  379535 provision.go:177] copyRemoteCerts
	I1210 07:25:17.210290  379535 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:25:17.210339  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.227542  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:17.322802  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:25:17.340341  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 07:25:17.357777  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 07:25:17.377216  379535 provision.go:87] duration metric: took 458.025542ms to configureAuth
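configureAuth generates a server certificate whose SANs are listed a few lines up; one hedged way to confirm them on the machine after the copy (not part of the test run itself) would be:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # should list: addons-054300, localhost, minikube, 127.0.0.1, 192.168.49.2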
	I1210 07:25:17.377248  379535 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:25:17.377447  379535 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:25:17.377559  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.394952  379535 main.go:143] libmachine: Using SSH client type: native
	I1210 07:25:17.395314  379535 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1210 07:25:17.395337  379535 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:25:17.682930  379535 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:25:17.682949  379535 machine.go:97] duration metric: took 4.23262496s to provisionDockerMachine
	I1210 07:25:17.682959  379535 client.go:176] duration metric: took 12.20613005s to LocalClient.Create
	I1210 07:25:17.682970  379535 start.go:167] duration metric: took 12.206188783s to libmachine.API.Create "addons-054300"
	I1210 07:25:17.682976  379535 start.go:293] postStartSetup for "addons-054300" (driver="docker")
	I1210 07:25:17.682986  379535 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:25:17.683079  379535 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:25:17.683132  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.700183  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:17.794603  379535 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:25:17.797708  379535 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:25:17.797737  379535 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:25:17.797749  379535 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:25:17.797814  379535 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:25:17.797841  379535 start.go:296] duration metric: took 114.859215ms for postStartSetup
	I1210 07:25:17.798153  379535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054300
	I1210 07:25:17.814914  379535 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/config.json ...
	I1210 07:25:17.815226  379535 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:25:17.815279  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.831700  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:17.928184  379535 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:25:17.932943  379535 start.go:128] duration metric: took 12.457888416s to createHost
	I1210 07:25:17.932971  379535 start.go:83] releasing machines lock for "addons-054300", held for 12.458057295s
	I1210 07:25:17.933042  379535 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-054300
	I1210 07:25:17.955083  379535 ssh_runner.go:195] Run: cat /version.json
	I1210 07:25:17.955123  379535 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:25:17.955156  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.955203  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:17.973722  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:17.976966  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:18.166947  379535 ssh_runner.go:195] Run: systemctl --version
	I1210 07:25:18.173006  379535 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:25:18.210955  379535 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:25:18.215736  379535 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:25:18.215812  379535 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:25:18.246211  379535 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 07:25:18.246239  379535 start.go:496] detecting cgroup driver to use...
	I1210 07:25:18.246273  379535 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:25:18.246328  379535 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:25:18.263964  379535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:25:18.277028  379535 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:25:18.277114  379535 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:25:18.294661  379535 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:25:18.313256  379535 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:25:18.434176  379535 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:25:18.549532  379535 docker.go:234] disabling docker service ...
	I1210 07:25:18.549646  379535 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:25:18.570441  379535 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:25:18.583494  379535 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:25:18.697608  379535 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:25:18.818933  379535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:25:18.832562  379535 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:25:18.848261  379535 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:25:18.848340  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.857134  379535 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:25:18.857257  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.866348  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.875386  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.884266  379535 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:25:18.892384  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.900937  379535 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:25:18.913750  379535 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
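Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in settings (a reconstruction for orientation, inferred from the sed commands, not the literal file contents):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]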
	I1210 07:25:18.922204  379535 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:25:18.929700  379535 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:25:18.936714  379535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:25:19.059141  379535 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:25:19.230038  379535 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:25:19.230142  379535 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:25:19.234032  379535 start.go:564] Will wait 60s for crictl version
	I1210 07:25:19.234118  379535 ssh_runner.go:195] Run: which crictl
	I1210 07:25:19.237507  379535 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:25:19.264531  379535 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:25:19.264689  379535 ssh_runner.go:195] Run: crio --version
	I1210 07:25:19.295296  379535 ssh_runner.go:195] Run: crio --version
	I1210 07:25:19.329827  379535 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 07:25:19.332622  379535 cli_runner.go:164] Run: docker network inspect addons-054300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:25:19.349151  379535 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:25:19.352931  379535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:25:19.362552  379535 kubeadm.go:884] updating cluster {Name:addons-054300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:25:19.362681  379535 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 07:25:19.362745  379535 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:25:19.400212  379535 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:25:19.400240  379535 crio.go:433] Images already preloaded, skipping extraction
	I1210 07:25:19.400297  379535 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:25:19.424822  379535 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:25:19.424849  379535 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:25:19.424857  379535 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1210 07:25:19.424955  379535 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-054300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:25:19.425039  379535 ssh_runner.go:195] Run: crio config
	I1210 07:25:19.475377  379535 cni.go:84] Creating CNI manager for ""
	I1210 07:25:19.475400  379535 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:25:19.475418  379535 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:25:19.475442  379535 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-054300 NodeName:addons-054300 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:25:19.475575  379535 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-054300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
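	Before handing this file to kubeadm it can be validated without side effects; a hedged sketch using the binary path and config path that appear later in this log:
	
	    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run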
	
	I1210 07:25:19.475656  379535 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 07:25:19.483645  379535 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:25:19.483720  379535 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:25:19.491413  379535 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1210 07:25:19.504415  379535 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 07:25:19.517408  379535 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1210 07:25:19.530382  379535 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:25:19.534012  379535 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 07:25:19.544387  379535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:25:19.651925  379535 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:25:19.668332  379535 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300 for IP: 192.168.49.2
	I1210 07:25:19.668356  379535 certs.go:195] generating shared ca certs ...
	I1210 07:25:19.668386  379535 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:19.668540  379535 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:25:20.330721  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt ...
	I1210 07:25:20.330758  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt: {Name:mk3644e11bdcb0925a9a05bad1e0e3fca414ff61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:20.330994  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key ...
	I1210 07:25:20.331036  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key: {Name:mkbd491725c3973182b429cc0698bef0142dee42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:20.331139  379535 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:25:20.652005  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt ...
	I1210 07:25:20.652039  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt: {Name:mk1d9cab3816c24cf58418acb5b2427e8af1ed22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:20.652235  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key ...
	I1210 07:25:20.652249  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key: {Name:mk5cf3672a5f26dcedf3b7e878f4e247d5d21fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:20.652331  379535 certs.go:257] generating profile certs ...
	I1210 07:25:20.652397  379535 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.key
	I1210 07:25:20.652415  379535 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt with IP's: []
	I1210 07:25:21.188695  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt ...
	I1210 07:25:21.188730  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: {Name:mk390f4e644bc83243db754d72329bce977b5ca9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:21.188931  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.key ...
	I1210 07:25:21.188946  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.key: {Name:mk54c3651d6e559b24dc9640918369d8c10570cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:21.189036  379535 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key.5ca2ca84
	I1210 07:25:21.189058  379535 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt.5ca2ca84 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 07:25:21.496556  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt.5ca2ca84 ...
	I1210 07:25:21.496592  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt.5ca2ca84: {Name:mk66955cc7b6be36bc2ca2ad143c24a06520bbaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:21.496774  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key.5ca2ca84 ...
	I1210 07:25:21.496793  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key.5ca2ca84: {Name:mk36e26c8bb9140805c993e13a8c5793bb88983a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:21.496880  379535 certs.go:382] copying /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt.5ca2ca84 -> /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt
	I1210 07:25:21.496961  379535 certs.go:386] copying /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key.5ca2ca84 -> /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key
	I1210 07:25:21.497020  379535 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.key
	I1210 07:25:21.497046  379535 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.crt with IP's: []
	I1210 07:25:22.020101  379535 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.crt ...
	I1210 07:25:22.020135  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.crt: {Name:mk170e6f876c7bd4d99312da16ff5bcd9a092f47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:22.020328  379535 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.key ...
	I1210 07:25:22.020345  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.key: {Name:mkc75639790e7f9e05cc24c3d1c0a1a459121603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:22.020536  379535 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:25:22.020585  379535 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:25:22.020613  379535 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:25:22.020656  379535 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:25:22.021249  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:25:22.040846  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:25:22.059006  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:25:22.077418  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:25:22.095361  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 07:25:22.112988  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 07:25:22.130149  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:25:22.147224  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 07:25:22.164232  379535 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:25:22.181666  379535 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:25:22.194110  379535 ssh_runner.go:195] Run: openssl version
	I1210 07:25:22.200299  379535 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:25:22.208135  379535 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:25:22.215542  379535 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:25:22.219383  379535 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:25:22.219498  379535 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:25:22.261025  379535 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:25:22.268549  379535 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
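The b5213941.0 name above follows OpenSSL's subject-hash convention: the symlink is named after the CA certificate's hash plus a .0 suffix. A sketch reproducing it from the two commands in the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h = b5213941 here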
	I1210 07:25:22.275937  379535 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:25:22.279527  379535 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:25:22.279595  379535 kubeadm.go:401] StartCluster: {Name:addons-054300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-054300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:25:22.279695  379535 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:25:22.279783  379535 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:25:22.306924  379535 cri.go:89] found id: ""
	I1210 07:25:22.307001  379535 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:25:22.314946  379535 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:25:22.322827  379535 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:25:22.322891  379535 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:25:22.330873  379535 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:25:22.330903  379535 kubeadm.go:158] found existing configuration files:
	
	I1210 07:25:22.330959  379535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 07:25:22.338936  379535 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:25:22.339103  379535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:25:22.346675  379535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 07:25:22.354346  379535 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:25:22.354436  379535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:25:22.362242  379535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 07:25:22.370104  379535 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:25:22.370182  379535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:25:22.377766  379535 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 07:25:22.385673  379535 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:25:22.385787  379535 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:25:22.393336  379535 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:25:22.432359  379535 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 07:25:22.432454  379535 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:25:22.455812  379535 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:25:22.455913  379535 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:25:22.455975  379535 kubeadm.go:319] OS: Linux
	I1210 07:25:22.456062  379535 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:25:22.456150  379535 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:25:22.456214  379535 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:25:22.456271  379535 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:25:22.456327  379535 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:25:22.456382  379535 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:25:22.456434  379535 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:25:22.456489  379535 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:25:22.456547  379535 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:25:22.518675  379535 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:25:22.518827  379535 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:25:22.518959  379535 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:25:22.531396  379535 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:25:22.536637  379535 out.go:252]   - Generating certificates and keys ...
	I1210 07:25:22.536747  379535 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:25:22.536856  379535 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:25:22.860248  379535 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:25:23.796824  379535 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:25:23.896546  379535 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:25:24.409296  379535 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:25:24.623553  379535 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:25:24.623723  379535 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-054300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 07:25:24.799890  379535 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:25:24.800068  379535 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-054300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 07:25:25.268967  379535 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:25:25.583914  379535 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:25:25.907598  379535 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:25:25.907845  379535 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:25:26.166682  379535 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:25:26.771378  379535 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:25:26.870057  379535 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:25:27.567028  379535 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:25:28.426381  379535 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:25:28.426974  379535 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:25:28.429810  379535 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:25:28.434002  379535 out.go:252]   - Booting up control plane ...
	I1210 07:25:28.434113  379535 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:25:28.434191  379535 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:25:28.434268  379535 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:25:28.449101  379535 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:25:28.449449  379535 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:25:28.458111  379535 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:25:28.458235  379535 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:25:28.458302  379535 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:25:28.604351  379535 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:25:28.604473  379535 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:25:30.107599  379535 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501826536s
	I1210 07:25:30.109958  379535 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 07:25:30.110061  379535 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1210 07:25:30.110347  379535 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 07:25:30.110435  379535 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1210 07:25:33.927764  379535 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.816814046s
	I1210 07:25:34.618432  379535 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.50843364s
	I1210 07:25:36.611588  379535 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.50149902s
	I1210 07:25:36.645771  379535 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 07:25:36.661506  379535 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 07:25:36.679956  379535 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 07:25:36.680173  379535 kubeadm.go:319] [mark-control-plane] Marking the node addons-054300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 07:25:36.692487  379535 kubeadm.go:319] [bootstrap-token] Using token: j2595f.2uzlcpoq828sdy0t
	I1210 07:25:36.695598  379535 out.go:252]   - Configuring RBAC rules ...
	I1210 07:25:36.695729  379535 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 07:25:36.701812  379535 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 07:25:36.710331  379535 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 07:25:36.714509  379535 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 07:25:36.718647  379535 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 07:25:36.723470  379535 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 07:25:37.020842  379535 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 07:25:37.445442  379535 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1210 07:25:38.020051  379535 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1210 07:25:38.021570  379535 kubeadm.go:319] 
	I1210 07:25:38.021649  379535 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1210 07:25:38.021654  379535 kubeadm.go:319] 
	I1210 07:25:38.021733  379535 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1210 07:25:38.021737  379535 kubeadm.go:319] 
	I1210 07:25:38.021770  379535 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1210 07:25:38.021833  379535 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 07:25:38.021883  379535 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 07:25:38.021887  379535 kubeadm.go:319] 
	I1210 07:25:38.021941  379535 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1210 07:25:38.021945  379535 kubeadm.go:319] 
	I1210 07:25:38.021992  379535 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 07:25:38.021997  379535 kubeadm.go:319] 
	I1210 07:25:38.022048  379535 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1210 07:25:38.022133  379535 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 07:25:38.022202  379535 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 07:25:38.022206  379535 kubeadm.go:319] 
	I1210 07:25:38.022290  379535 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 07:25:38.022369  379535 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1210 07:25:38.022373  379535 kubeadm.go:319] 
	I1210 07:25:38.022457  379535 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token j2595f.2uzlcpoq828sdy0t \
	I1210 07:25:38.022560  379535 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54503a554dcd3ad3945fa55f63c2936466b69a16c4d6182df26a96009ed0cd66 \
	I1210 07:25:38.022580  379535 kubeadm.go:319] 	--control-plane 
	I1210 07:25:38.022584  379535 kubeadm.go:319] 
	I1210 07:25:38.022669  379535 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1210 07:25:38.022673  379535 kubeadm.go:319] 
	I1210 07:25:38.022755  379535 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token j2595f.2uzlcpoq828sdy0t \
	I1210 07:25:38.022857  379535 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:54503a554dcd3ad3945fa55f63c2936466b69a16c4d6182df26a96009ed0cd66 
	I1210 07:25:38.026853  379535 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1210 07:25:38.027129  379535 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:25:38.027246  379535 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
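	
	The three preflight warnings above are non-fatal. The Service-Kubelet one in particular is expected under minikube, which starts the kubelet itself over SSH a moment later; on a hand-managed kubeadm host it would be silenced by enabling the unit (a sketch, not something minikube needs):
	
	    # Enable the kubelet unit at boot, addressing [WARNING Service-Kubelet].
	    sudo systemctl enable --now kubelet.service
	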
	I1210 07:25:38.027271  379535 cni.go:84] Creating CNI manager for ""
	I1210 07:25:38.027282  379535 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:25:38.030419  379535 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1210 07:25:38.033293  379535 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 07:25:38.038407  379535 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1210 07:25:38.038436  379535 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 07:25:38.054611  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
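	
	The apply above installs the kindnet CNI from the generated /var/tmp/minikube/cni.yaml. Assuming the manifest creates the usual kube-system DaemonSet named kindnet (not confirmed by this log), its rollout could be checked with:
	
	    # Hypothetical check: wait for the CNI DaemonSet applied above to roll out.
	    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
	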
	I1210 07:25:38.365291  379535 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 07:25:38.365451  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:38.365537  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-054300 minikube.k8s.io/updated_at=2025_12_10T07_25_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9 minikube.k8s.io/name=addons-054300 minikube.k8s.io/primary=true
	I1210 07:25:38.391817  379535 ops.go:34] apiserver oom_adj: -16
	I1210 07:25:38.570561  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:39.071452  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:39.571206  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:40.070811  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:40.570800  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:41.071282  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:41.571285  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:42.071130  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:42.571192  379535 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 07:25:42.748789  379535 kubeadm.go:1114] duration metric: took 4.383396627s to wait for elevateKubeSystemPrivileges
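	
	The half-second cadence of the "kubectl get sa default" calls above is a poll: the minikube-rbac clusterrolebinding is created first, then minikube waits for the service account controller to create the default ServiceAccount before declaring elevateKubeSystemPrivileges done. A minimal shell equivalent of that wait:
	
	    # Poll until the 'default' ServiceAccount exists, mirroring the loop logged above.
	    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	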
	I1210 07:25:42.748816  379535 kubeadm.go:403] duration metric: took 20.469243282s to StartCluster
	I1210 07:25:42.748834  379535 settings.go:142] acquiring lock: {Name:mk83336eaf1e9f7632e16e15e8d9e14eb0e0d0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:42.748944  379535 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:25:42.749383  379535 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:25:42.749560  379535 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:25:42.749742  379535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 07:25:42.749997  379535 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:25:42.750026  379535 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1210 07:25:42.750096  379535 addons.go:70] Setting yakd=true in profile "addons-054300"
	I1210 07:25:42.750110  379535 addons.go:239] Setting addon yakd=true in "addons-054300"
	I1210 07:25:42.750132  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.750573  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.751055  379535 addons.go:70] Setting metrics-server=true in profile "addons-054300"
	I1210 07:25:42.751072  379535 addons.go:239] Setting addon metrics-server=true in "addons-054300"
	I1210 07:25:42.751093  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.751511  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.751693  379535 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-054300"
	I1210 07:25:42.751722  379535 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-054300"
	I1210 07:25:42.751746  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.752162  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.755088  379535 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-054300"
	I1210 07:25:42.755120  379535 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-054300"
	I1210 07:25:42.755162  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.755649  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.758822  379535 addons.go:70] Setting registry=true in profile "addons-054300"
	I1210 07:25:42.758901  379535 addons.go:239] Setting addon registry=true in "addons-054300"
	I1210 07:25:42.758949  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.759568  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.766084  379535 addons.go:70] Setting cloud-spanner=true in profile "addons-054300"
	I1210 07:25:42.766131  379535 addons.go:239] Setting addon cloud-spanner=true in "addons-054300"
	I1210 07:25:42.766167  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.766670  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.770271  379535 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-054300"
	I1210 07:25:42.770348  379535 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-054300"
	I1210 07:25:42.770379  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.770846  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.774729  379535 addons.go:70] Setting registry-creds=true in profile "addons-054300"
	I1210 07:25:42.774777  379535 addons.go:239] Setting addon registry-creds=true in "addons-054300"
	I1210 07:25:42.774814  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.775337  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.776781  379535 addons.go:70] Setting default-storageclass=true in profile "addons-054300"
	I1210 07:25:42.776844  379535 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-054300"
	I1210 07:25:42.777403  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.785668  379535 addons.go:70] Setting storage-provisioner=true in profile "addons-054300"
	I1210 07:25:42.785704  379535 addons.go:239] Setting addon storage-provisioner=true in "addons-054300"
	I1210 07:25:42.785744  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.786236  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.793721  379535 addons.go:70] Setting gcp-auth=true in profile "addons-054300"
	I1210 07:25:42.793768  379535 mustload.go:66] Loading cluster: addons-054300
	I1210 07:25:42.793976  379535 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:25:42.794235  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.797010  379535 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-054300"
	I1210 07:25:42.797046  379535 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-054300"
	I1210 07:25:42.797398  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.823327  379535 addons.go:70] Setting ingress=true in profile "addons-054300"
	I1210 07:25:42.823398  379535 addons.go:239] Setting addon ingress=true in "addons-054300"
	I1210 07:25:42.823467  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.824381  379535 addons.go:70] Setting volcano=true in profile "addons-054300"
	I1210 07:25:42.824410  379535 addons.go:239] Setting addon volcano=true in "addons-054300"
	I1210 07:25:42.824443  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.824929  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.827287  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.839584  379535 addons.go:70] Setting ingress-dns=true in profile "addons-054300"
	I1210 07:25:42.839621  379535 addons.go:239] Setting addon ingress-dns=true in "addons-054300"
	I1210 07:25:42.839670  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.840167  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.849889  379535 addons.go:70] Setting volumesnapshots=true in profile "addons-054300"
	I1210 07:25:42.849925  379535 addons.go:239] Setting addon volumesnapshots=true in "addons-054300"
	I1210 07:25:42.849960  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.850465  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.853368  379535 addons.go:70] Setting inspektor-gadget=true in profile "addons-054300"
	I1210 07:25:42.853398  379535 addons.go:239] Setting addon inspektor-gadget=true in "addons-054300"
	I1210 07:25:42.853431  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.853904  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.885681  379535 out.go:179] * Verifying Kubernetes components...
	I1210 07:25:42.888945  379535 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:25:42.935871  379535 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1210 07:25:42.954425  379535 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1210 07:25:42.964823  379535 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 07:25:42.964856  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1210 07:25:42.964942  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:42.975344  379535 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-054300"
	I1210 07:25:42.975390  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:42.975854  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:42.999370  379535 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 07:25:42.999399  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1210 07:25:42.999475  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.009494  379535 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1210 07:25:43.013279  379535 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 07:25:43.013475  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:43.023138  379535 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1210 07:25:43.023339  379535 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1210 07:25:43.023482  379535 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1210 07:25:43.050392  379535 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1210 07:25:43.054368  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1210 07:25:43.054434  379535 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1210 07:25:43.054535  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.058795  379535 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 07:25:43.058820  379535 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 07:25:43.058893  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.065183  379535 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1210 07:25:43.065261  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1210 07:25:43.065362  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.077872  379535 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1210 07:25:43.082385  379535 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 07:25:43.082410  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1210 07:25:43.082477  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.085796  379535 out.go:179]   - Using image docker.io/registry:3.0.0
	I1210 07:25:43.089398  379535 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1210 07:25:43.089457  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1210 07:25:43.089538  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.093508  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1210 07:25:43.099438  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1210 07:25:43.103943  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1210 07:25:43.108559  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1210 07:25:43.113009  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1210 07:25:43.115268  379535 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 07:25:43.115669  379535 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1210 07:25:43.116026  379535 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1210 07:25:43.125624  379535 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 07:25:43.125656  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1210 07:25:43.125715  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.138813  379535 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1210 07:25:43.141166  379535 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:25:43.141185  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:25:43.141247  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.157909  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1210 07:25:43.162018  379535 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1210 07:25:43.162044  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1210 07:25:43.162111  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.167453  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1210 07:25:43.167517  379535 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1210 07:25:43.167592  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.175304  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1210 07:25:43.176780  379535 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1210 07:25:43.177726  379535 addons.go:239] Setting addon default-storageclass=true in "addons-054300"
	I1210 07:25:43.181023  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:43.181486  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:43.177764  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.185884  379535 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 07:25:43.185911  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1210 07:25:43.185979  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.186658  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1210 07:25:43.219889  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.257669  379535 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1210 07:25:43.261393  379535 out.go:179]   - Using image docker.io/busybox:stable
	I1210 07:25:43.261452  379535 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1210 07:25:43.264178  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1210 07:25:43.264212  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1210 07:25:43.264288  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.264799  379535 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 07:25:43.264810  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1210 07:25:43.264855  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.289447  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.360362  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.363518  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.364848  379535 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:25:43.364864  379535 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:25:43.365000  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:43.370413  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.374646  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.378905  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.383495  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.414873  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.435283  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.438931  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.446038  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.452982  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:43.461538  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	W1210 07:25:43.464745  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.464781  379535 retry.go:31] will retry after 308.729586ms: ssh: handshake failed: EOF
	W1210 07:25:43.465808  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.465830  379535 retry.go:31] will retry after 139.182681ms: ssh: handshake failed: EOF
	W1210 07:25:43.466555  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.466573  379535 retry.go:31] will retry after 151.307701ms: ssh: handshake failed: EOF
	W1210 07:25:43.466916  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.466930  379535 retry.go:31] will retry after 336.970388ms: ssh: handshake failed: EOF
	I1210 07:25:43.510736  379535 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 07:25:43.510821  379535 ssh_runner.go:195] Run: sudo systemctl start kubelet
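	
	The long bash pipeline two lines up rewrites the coredns ConfigMap in place: sed inserts a "log" directive before the "errors" line and a "hosts" block before "forward . /etc/resolv.conf", then pipes the result back through "kubectl replace". Reconstructed from those sed expressions, the injected Corefile fragment is:
	
	        log
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	
	so pods can resolve host.minikube.internal to the Docker host gateway, as confirmed by the "host record injected" line below.
	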
	W1210 07:25:43.606200  379535 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1210 07:25:43.606227  379535 retry.go:31] will retry after 360.804886ms: ssh: handshake failed: EOF
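	
	The handshake EOFs above are transient: many ssh clients dial port 33143 at once while sshd in the freshly started container is still warming up, and each failure is retried after a randomized delay (minikube's helper lives in retry.go). The same pattern as a shell sketch:
	
	    # Sketch of retry-with-backoff for a flaky connection, as logged above.
	    for attempt in 1 2 3 4 5; do
	      ssh -o ConnectTimeout=2 -p 33143 docker@127.0.0.1 true && break
	      sleep "$attempt"   # linear here; the real delays are randomized
	    done
	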
	I1210 07:25:43.816827  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1210 07:25:44.078612  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1210 07:25:44.079751  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1210 07:25:44.090200  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1210 07:25:44.101807  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1210 07:25:44.101875  379535 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1210 07:25:44.108831  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:25:44.135445  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1210 07:25:44.199592  379535 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 07:25:44.199624  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1210 07:25:44.204341  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1210 07:25:44.245447  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1210 07:25:44.245473  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1210 07:25:44.254347  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1210 07:25:44.281849  379535 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1210 07:25:44.281875  379535 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1210 07:25:44.284931  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1210 07:25:44.284955  379535 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1210 07:25:44.356633  379535 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 07:25:44.356661  379535 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 07:25:44.363967  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1210 07:25:44.380532  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1210 07:25:44.380559  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1210 07:25:44.520190  379535 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 07:25:44.520217  379535 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 07:25:44.522478  379535 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1210 07:25:44.522500  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1210 07:25:44.524639  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1210 07:25:44.524663  379535 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1210 07:25:44.614484  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:25:44.628458  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1210 07:25:44.628529  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1210 07:25:44.629390  379535 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1210 07:25:44.629445  379535 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1210 07:25:44.714017  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 07:25:44.722115  379535 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1210 07:25:44.722181  379535 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1210 07:25:44.742852  379535 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1210 07:25:44.742933  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1210 07:25:44.749839  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1210 07:25:44.845846  379535 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.334978518s)
	I1210 07:25:44.846628  379535 node_ready.go:35] waiting up to 6m0s for node "addons-054300" to be "Ready" ...
	I1210 07:25:44.846706  379535 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.33593965s)
	I1210 07:25:44.846798  379535 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1210 07:25:44.944488  379535 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1210 07:25:44.944566  379535 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1210 07:25:45.017588  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.200719268s)
	I1210 07:25:45.026606  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1210 07:25:45.026690  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1210 07:25:45.036267  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1210 07:25:45.137819  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1210 07:25:45.137926  379535 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1210 07:25:45.224194  379535 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1210 07:25:45.224374  379535 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1210 07:25:45.341278  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.262586976s)
	I1210 07:25:45.356582  379535 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-054300" context rescaled to 1 replicas
	I1210 07:25:45.379609  379535 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 07:25:45.379681  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1210 07:25:45.429938  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1210 07:25:45.430010  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1210 07:25:45.446722  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1210 07:25:45.446792  379535 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1210 07:25:45.461045  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1210 07:25:45.461115  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1210 07:25:45.544427  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1210 07:25:45.544501  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1210 07:25:45.567400  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 07:25:45.597184  379535 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1210 07:25:45.597265  379535 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1210 07:25:45.789118  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1210 07:25:46.852206  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:48.066955  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (3.987135882s)
	I1210 07:25:48.067061  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.976798742s)
	I1210 07:25:48.067114  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.958214639s)
	I1210 07:25:48.067151  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.931684581s)
	W1210 07:25:48.860299  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:48.961010  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.756631762s)
	I1210 07:25:48.961043  379535 addons.go:495] Verifying addon ingress=true in "addons-054300"
	I1210 07:25:48.961225  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.706849852s)
	I1210 07:25:48.961444  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.597444765s)
	I1210 07:25:48.961490  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.346985448s)
	I1210 07:25:48.961623  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.247518359s)
	I1210 07:25:48.961640  379535 addons.go:495] Verifying addon metrics-server=true in "addons-054300"
	I1210 07:25:48.961667  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.211768102s)
	I1210 07:25:48.961678  379535 addons.go:495] Verifying addon registry=true in "addons-054300"
	I1210 07:25:48.962039  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.925681048s)
	I1210 07:25:48.962296  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.394818158s)
	W1210 07:25:48.962321  379535 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1210 07:25:48.962341  379535 retry.go:31] will retry after 259.193709ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
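	
	This failure is an ordering race rather than a real error: the csi-hostpath-snapclass VolumeSnapshotClass is applied in the same invocation that creates its CRD, and API discovery has not yet registered snapshot.storage.k8s.io/v1 (hence "resource mapping not found"). The retry below with "apply --force" succeeds once the CRDs are established. A sketch that avoids the race outright by splitting the apply into two phases:
	
	    # Phase 1: install the snapshot CRDs and wait until they are established.
	    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	                  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	                  -f snapshot.storage.k8s.io_volumesnapshots.yaml
	    kubectl wait --for=condition=Established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	      crd/volumesnapshots.snapshot.storage.k8s.io
	    # Phase 2: the custom resources can now be applied safely.
	    kubectl apply -f csi-hostpath-snapshotclass.yaml \
	                  -f rbac-volume-snapshot-controller.yaml \
	                  -f volume-snapshot-controller-deployment.yaml
	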
	I1210 07:25:48.965103  379535 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-054300 service yakd-dashboard -n yakd-dashboard
	
	I1210 07:25:48.965245  379535 out.go:179] * Verifying registry addon...
	I1210 07:25:48.965290  379535 out.go:179] * Verifying ingress addon...
	I1210 07:25:48.970623  379535 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1210 07:25:48.970698  379535 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1210 07:25:48.979123  379535 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
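	
	This warning is an optimistic-concurrency conflict: the addon read the local-path StorageClass, another writer updated it, and the follow-up write carried a stale resourceVersion. A JSON-merge patch carries no resourceVersion and sidesteps the conflict; as a sketch of the equivalent manual fix:
	
	    # Mark local-path as the default StorageClass without a read-modify-write cycle.
	    kubectl patch storageclass local-path -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	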
	I1210 07:25:48.979775  379535 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 07:25:48.979790  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:48.980042  379535 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1210 07:25:48.980050  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
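	
	The kapi.go lines here, and for the rest of this section, poll the labelled pods about twice a second until they leave Pending. Expressed declaratively with the selectors logged above, the same waits would be roughly:
	
	    # Rough kubectl equivalents of the kapi polling loops; note the
	    # ingress-nginx selector also matches completed admission-job pods,
	    # which never become Ready, so this is a sketch rather than a drop-in.
	    kubectl -n kube-system wait --for=condition=Ready pod \
	      -l kubernetes.io/minikube-addons=registry --timeout=6m
	    kubectl -n ingress-nginx wait --for=condition=Ready pod \
	      -l app.kubernetes.io/name=ingress-nginx --timeout=6m
	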
	I1210 07:25:49.221906  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1210 07:25:49.228714  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.439512237s)
	I1210 07:25:49.228749  379535 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-054300"
	I1210 07:25:49.231984  379535 out.go:179] * Verifying csi-hostpath-driver addon...
	I1210 07:25:49.236414  379535 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1210 07:25:49.253799  379535 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 07:25:49.253872  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:49.475112  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:49.475745  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:49.740232  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:49.974842  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:49.975185  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:50.240742  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:50.474800  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:50.475192  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:50.631155  379535 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1210 07:25:50.631319  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:50.651100  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:50.740624  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:50.757512  379535 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1210 07:25:50.770511  379535 addons.go:239] Setting addon gcp-auth=true in "addons-054300"
	I1210 07:25:50.770559  379535 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:25:50.771044  379535 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:25:50.788341  379535 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1210 07:25:50.788401  379535 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:25:50.809701  379535 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:25:50.974580  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:50.974862  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:51.239648  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:25:51.350301  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:51.474741  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:51.475125  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:51.741212  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:51.975869  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:51.976141  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:52.005853  379535 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.783894619s)
	I1210 07:25:52.005991  379535 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.21762259s)
	I1210 07:25:52.009250  379535 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1210 07:25:52.012090  379535 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1210 07:25:52.014882  379535 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1210 07:25:52.014913  379535 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1210 07:25:52.030261  379535 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1210 07:25:52.030288  379535 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1210 07:25:52.045991  379535 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 07:25:52.046068  379535 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1210 07:25:52.060219  379535 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1210 07:25:52.239863  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:52.481877  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:52.482708  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:52.554826  379535 addons.go:495] Verifying addon gcp-auth=true in "addons-054300"
	I1210 07:25:52.558570  379535 out.go:179] * Verifying gcp-auth addon...
	I1210 07:25:52.563199  379535 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1210 07:25:52.577567  379535 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1210 07:25:52.577601  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
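
The kapi.go lines above are a label-selector poll: list pods matching kubernetes.io/minikube-addons=gcp-auth in the gcp-auth namespace and keep waiting while the phase is Pending. A minimal client-go sketch of the same loop — not minikube's own kapi.go code — assuming a kubeconfig at the default ~/.kube/config location and the roughly 500ms polling cadence visible in the timestamps:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := client.CoreV1().Pods("gcp-auth").List(context.TODO(), metav1.ListOptions{
			LabelSelector: "kubernetes.io/minikube-addons=gcp-auth",
		})
		if err == nil {
			for _, p := range pods.Items {
				fmt.Printf("pod %s phase %s\n", p.Name, p.Status.Phase)
				if p.Status.Phase == corev1.PodRunning {
					return // stop waiting once the addon pod is Running
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
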
	I1210 07:25:52.739857  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:52.974557  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:52.975047  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:53.075600  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:53.239189  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:53.474606  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:53.474931  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:53.566881  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:53.739956  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:25:53.849804  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:53.974081  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:53.974623  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:54.066524  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:54.239620  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:54.474013  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:54.474384  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:54.566228  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:54.740111  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:54.974323  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:54.974499  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:55.066606  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:55.239581  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:55.475049  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:55.475137  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:55.566986  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:55.739821  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:55.975082  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:55.975237  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:56.066224  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:56.239148  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:25:56.349795  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:56.474582  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:56.475120  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:56.566910  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:56.740124  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:56.973905  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:56.974121  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:57.066014  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:57.240239  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:57.474586  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:57.474646  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:57.566571  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:57.739715  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:57.973733  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:57.973916  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:58.066765  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:58.239590  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:58.474502  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:58.474692  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:58.566649  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:58.739528  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:25:58.850128  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:25:58.974222  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:58.974503  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:59.066479  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:59.239145  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:59.474630  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:59.474797  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:25:59.566711  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:25:59.739611  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:25:59.974444  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:25:59.974908  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:00.079545  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:00.248389  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:00.473960  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:00.474470  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:00.566256  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:00.740335  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:00.850691  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:00.974043  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:00.974243  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:01.066852  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:01.240362  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:01.474497  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:01.474663  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:01.566494  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:01.739669  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:01.974838  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:01.975319  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:02.066943  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:02.240252  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:02.474313  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:02.474596  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:02.566593  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:02.739521  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:02.974612  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:02.974987  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:03.066721  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:03.239807  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:03.349828  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:03.474790  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:03.474856  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:03.567083  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:03.739899  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:03.974397  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:03.974603  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:04.074764  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:04.239830  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:04.474334  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:04.474724  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:04.566520  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:04.739336  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:04.974447  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:04.974553  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:05.066496  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:05.239568  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:05.475094  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:05.475795  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:05.566670  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:05.739828  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:05.849919  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:05.973919  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:05.973970  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:06.067055  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:06.239904  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:06.474363  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:06.474433  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:06.566512  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:06.739292  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:06.974262  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:06.974623  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:07.067047  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:07.240144  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:07.475513  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:07.475607  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:07.566444  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:07.739328  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:07.850119  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:07.974557  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:07.974911  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:08.066956  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:08.240121  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:08.474962  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:08.475115  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:08.566772  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:08.740541  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:08.974915  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:08.974973  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:09.066443  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:09.239271  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:09.474582  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:09.474782  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:09.566293  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:09.739433  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:09.974043  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:09.974201  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:10.067853  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:10.239884  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:10.349517  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:10.478676  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:10.481342  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:10.566707  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:10.739800  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:10.974899  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:10.975200  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:11.066797  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:11.239697  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:11.474610  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:11.475080  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:11.570718  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:11.739888  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:11.974645  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:11.975056  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:12.066894  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:12.239840  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:12.349576  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:12.473931  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:12.474204  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:12.565982  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:12.740010  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:12.973699  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:12.974394  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:13.066469  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:13.239347  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:13.475139  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:13.475534  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:13.566308  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:13.740258  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:13.975626  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:13.979428  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:14.066481  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:14.239702  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:14.350607  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:14.474586  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:14.475558  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:14.566383  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:14.739376  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:14.974989  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:14.975445  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:15.066372  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:15.239234  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:15.475091  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:15.475482  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:15.566275  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:15.739389  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:15.975185  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:15.975277  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:16.067055  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:16.240069  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:16.475393  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:16.476633  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:16.566247  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:16.739103  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:16.849757  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:16.974221  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:16.974402  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:17.066549  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:17.239823  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:17.474104  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:17.474227  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:17.566079  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:17.740104  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:17.974304  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:17.974591  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:18.066741  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:18.240743  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:18.474224  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:18.474878  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:18.566921  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:18.739868  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:18.849890  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:18.974356  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:18.974560  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:19.066381  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:19.240129  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:19.474285  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:19.474423  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:19.566113  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:19.739972  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:19.974723  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:19.975260  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:20.066226  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:20.240656  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:20.474292  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:20.475055  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:20.566028  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:20.740244  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:20.850077  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:20.974244  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:20.974606  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:21.066628  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:21.239578  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:21.474325  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:21.474475  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:21.566494  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:21.739610  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:21.974483  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:21.974680  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:22.066777  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:22.239689  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:22.474224  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:22.474393  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:22.566202  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:22.740598  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:22.974669  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:22.975000  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:23.066878  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:23.239664  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1210 07:26:23.351613  379535 node_ready.go:57] node "addons-054300" has "Ready":"False" status (will retry)
	I1210 07:26:23.473807  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:23.474358  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:23.566134  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:23.739982  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:23.974811  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:23.975215  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:24.067056  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:24.240028  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:24.375728  379535 node_ready.go:49] node "addons-054300" is "Ready"
	I1210 07:26:24.375760  379535 node_ready.go:38] duration metric: took 39.529026157s for node "addons-054300" to be "Ready" ...
	I1210 07:26:24.375773  379535 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:26:24.375863  379535 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:26:24.399205  379535 api_server.go:72] duration metric: took 41.649618356s to wait for apiserver process to appear ...
	I1210 07:26:24.399234  379535 api_server.go:88] waiting for apiserver healthz status ...
	I1210 07:26:24.399276  379535 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1210 07:26:24.479720  379535 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1210 07:26:24.485954  379535 api_server.go:141] control plane version: v1.34.2
	I1210 07:26:24.485981  379535 api_server.go:131] duration metric: took 86.740423ms to wait for apiserver health ...
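
The healthz wait recorded above is a plain HTTPS GET against the apiserver that counts as healthy once it returns 200 with body "ok". A minimal sketch, with the endpoint taken from the log; TLS verification is skipped here only because this sketch has no access to the cluster CA (minikube verifies against its generated certs):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d body %q\n", resp.StatusCode, body) // expect 200 "ok"
}
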
	I1210 07:26:24.485990  379535 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 07:26:24.492998  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:24.508707  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:24.522627  379535 system_pods.go:59] 19 kube-system pods found
	I1210 07:26:24.522714  379535 system_pods.go:61] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending
	I1210 07:26:24.522737  379535 system_pods.go:61] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending
	I1210 07:26:24.522758  379535 system_pods.go:61] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending
	I1210 07:26:24.522791  379535 system_pods.go:61] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending
	I1210 07:26:24.522821  379535 system_pods.go:61] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:24.522840  379535 system_pods.go:61] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:24.522867  379535 system_pods.go:61] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:24.522885  379535 system_pods.go:61] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:24.522913  379535 system_pods.go:61] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending
	I1210 07:26:24.522942  379535 system_pods.go:61] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:24.522960  379535 system_pods.go:61] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:24.522977  379535 system_pods.go:61] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending
	I1210 07:26:24.522997  379535 system_pods.go:61] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending
	I1210 07:26:24.523041  379535 system_pods.go:61] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:24.523064  379535 system_pods.go:61] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending
	I1210 07:26:24.523085  379535 system_pods.go:61] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending
	I1210 07:26:24.523115  379535 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending
	I1210 07:26:24.523136  379535 system_pods.go:61] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending
	I1210 07:26:24.523154  379535 system_pods.go:61] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending
	I1210 07:26:24.523191  379535 system_pods.go:74] duration metric: took 37.184148ms to wait for pod list to return data ...
	I1210 07:26:24.523214  379535 default_sa.go:34] waiting for default service account to be created ...
	I1210 07:26:24.537021  379535 default_sa.go:45] found service account: "default"
	I1210 07:26:24.537099  379535 default_sa.go:55] duration metric: took 13.865785ms for default service account to be created ...
	I1210 07:26:24.537127  379535 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 07:26:24.549993  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:24.550028  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending
	I1210 07:26:24.550035  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending
	I1210 07:26:24.550061  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending
	I1210 07:26:24.550068  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending
	I1210 07:26:24.550073  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:24.550079  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:24.550083  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:24.550087  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:24.550103  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:24.550109  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:24.550120  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:24.550124  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending
	I1210 07:26:24.550148  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending
	I1210 07:26:24.550155  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:24.550159  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending
	I1210 07:26:24.550178  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending
	I1210 07:26:24.550182  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending
	I1210 07:26:24.550194  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending
	I1210 07:26:24.550198  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending
	I1210 07:26:24.550214  379535 retry.go:31] will retry after 209.733003ms: missing components: kube-dns
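
The retry.go line above, and the ones that follow, show the k8s-apps wait: list the kube-system pods, and if kube-dns (CoreDNS) is not yet Running, sleep a short jittered interval (209ms, 271ms, 340ms, 396ms in this run) and try again. A minimal sketch of that backoff shape, with checkPods as a hypothetical stand-in for the real pod check:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// checkPods is a hypothetical stand-in for the real check, which lists
// kube-system pods and requires kube-dns to report Running.
var calls int

func checkPods() bool {
	calls++
	return calls >= 4 // pretend kube-dns turns Ready on the 4th poll
}

func main() {
	backoff := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if checkPods() {
			fmt.Printf("components ready after %d attempts\n", attempt)
			return
		}
		// Jittered, slowly growing waits, matching the step sizes in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: missing components: kube-dns\n", wait)
		time.Sleep(wait)
		backoff = backoff * 5 / 4
	}
}
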
	I1210 07:26:24.582574  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:24.805578  379535 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1210 07:26:24.805650  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:24.812816  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:24.812899  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:26:24.812923  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending
	I1210 07:26:24.812947  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending
	I1210 07:26:24.812981  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending
	I1210 07:26:24.813000  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:24.813019  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:24.813037  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:24.813067  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:24.813089  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:24.813108  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:24.813128  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:24.813164  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 07:26:24.813183  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending
	I1210 07:26:24.813204  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:24.813222  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending
	I1210 07:26:24.813261  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending
	I1210 07:26:24.813281  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending
	I1210 07:26:24.813302  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:24.813336  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending
	I1210 07:26:24.813367  379535 retry.go:31] will retry after 271.44037ms: missing components: kube-dns
	I1210 07:26:24.984193  379535 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1210 07:26:24.984288  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:24.984264  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:25.074422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:25.101318  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:25.101417  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:26:25.101440  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending
	I1210 07:26:25.101462  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending
	I1210 07:26:25.101493  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending
	I1210 07:26:25.101512  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:25.101533  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:25.101552  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:25.101580  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:25.101602  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:25.101632  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:25.101662  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:25.101689  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 07:26:25.101712  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 07:26:25.101745  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:25.101767  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 07:26:25.101791  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 07:26:25.101828  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.101853  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.101877  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:26:25.101919  379535 retry.go:31] will retry after 340.731568ms: missing components: kube-dns
	I1210 07:26:25.242498  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:25.477132  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:25.477210  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 07:26:25.477232  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 07:26:25.477255  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 07:26:25.477293  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 07:26:25.477319  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:25.477339  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:25.477357  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:25.477376  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:25.477409  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:25.477437  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:25.477461  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:25.477484  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 07:26:25.477515  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 07:26:25.477549  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:25.477569  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 07:26:25.477588  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 07:26:25.477619  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.477641  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.477662  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 07:26:25.477691  379535 retry.go:31] will retry after 396.925776ms: missing components: kube-dns
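
The retry.go:31 line above is minikube backing off because the CoreDNS (kube-dns) pods are not yet Running. A minimal client-go sketch of the same readiness poll, assuming an illustrative kubeconfig path and a roughly matching 400ms interval (this is not minikube's actual helper):

package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; minikube resolves this per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 400ms (roughly the retry interval logged above) until a
	// coredns pod in kube-system reports Running, or give up after 6 minutes.
	err = wait.PollUntilContextTimeout(context.Background(), 400*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, nil // transient API error: keep retrying
			}
			for _, p := range pods.Items {
				if strings.HasPrefix(p.Name, "coredns") && p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // still "missing components: kube-dns"
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-dns is running")
}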
	I1210 07:26:25.564942  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:25.565686  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:25.567599  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:25.743248  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:25.883810  379535 system_pods.go:86] 19 kube-system pods found
	I1210 07:26:25.883861  379535 system_pods.go:89] "coredns-66bc5c9577-4tklf" [7ab27a4b-c0cb-4297-82f8-c643698c8a55] Running
	I1210 07:26:25.883874  379535 system_pods.go:89] "csi-hostpath-attacher-0" [1524b246-3702-4d62-9e4f-f22ed995293b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1210 07:26:25.883888  379535 system_pods.go:89] "csi-hostpath-resizer-0" [b9128a18-cb5f-4edb-8b6e-4ed22a1b87d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1210 07:26:25.883904  379535 system_pods.go:89] "csi-hostpathplugin-bmkhb" [d1ebcc23-869d-4c06-8782-2482f30c7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1210 07:26:25.883930  379535 system_pods.go:89] "etcd-addons-054300" [42d2bb8e-456d-4b5a-9b2f-87360b75f14d] Running
	I1210 07:26:25.883946  379535 system_pods.go:89] "kindnet-b47q8" [7e7edb78-2cfe-452f-b21c-fa34d8218fd7] Running
	I1210 07:26:25.883951  379535 system_pods.go:89] "kube-apiserver-addons-054300" [bf33d55e-6624-432d-9608-557cb01aa3fe] Running
	I1210 07:26:25.883956  379535 system_pods.go:89] "kube-controller-manager-addons-054300" [38540bf7-8aa3-4aeb-99bd-83b46ba127de] Running
	I1210 07:26:25.883974  379535 system_pods.go:89] "kube-ingress-dns-minikube" [fff30e1b-9736-4f98-b368-35e6ca0b24c2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1210 07:26:25.883986  379535 system_pods.go:89] "kube-proxy-lt4ld" [3f73faff-6f1d-4fc9-bfc8-42c1263bac55] Running
	I1210 07:26:25.883992  379535 system_pods.go:89] "kube-scheduler-addons-054300" [0c33a60e-04de-44e7-a88d-7d1cfaa2dfae] Running
	I1210 07:26:25.884005  379535 system_pods.go:89] "metrics-server-85b7d694d7-pcvgr" [c9ff9565-6dd1-4ac7-a023-ba2b2e1325bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 07:26:25.884020  379535 system_pods.go:89] "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1210 07:26:25.884027  379535 system_pods.go:89] "registry-6b586f9694-rgr2q" [baa76559-de43-4366-a7af-949a6b1936c6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1210 07:26:25.884040  379535 system_pods.go:89] "registry-creds-764b6fb674-7pk58" [818632b4-4e74-4c94-82c0-6e672524abcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1210 07:26:25.884047  379535 system_pods.go:89] "registry-proxy-x77gq" [2e689e98-e3b0-48c0-9457-3d5b5f99e3df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1210 07:26:25.884058  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9c9b8" [bb02026d-9028-430d-8610-4b6bb683900c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.884080  379535 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p5w2h" [a7bf4d61-ab59-4701-aaa4-0af2845b0906] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1210 07:26:25.884085  379535 system_pods.go:89] "storage-provisioner" [c8c5b10c-5e2d-457e-8703-b06f0ac1ea85] Running
	I1210 07:26:25.884098  379535 system_pods.go:126] duration metric: took 1.346951034s to wait for k8s-apps to be running ...
	I1210 07:26:25.884111  379535 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 07:26:25.884192  379535 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:26:25.904125  379535 system_svc.go:56] duration metric: took 20.003947ms WaitForService to wait for kubelet
	I1210 07:26:25.904165  379535 kubeadm.go:587] duration metric: took 43.154573113s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
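
The system_svc check above shells into the node and asks systemd whether kubelet is active; with --quiet, systemctl prints nothing and the exit code alone carries the answer (0 = active). A standalone sketch of that exit-code check, run locally rather than through minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the unit is active; any non-zero code means
	// inactive, failed, or unknown. --quiet suppresses all output.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
			return
		}
		panic(err) // the command could not be started at all
	}
	fmt.Println("kubelet is active")
}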
	I1210 07:26:25.904187  379535 node_conditions.go:102] verifying NodePressure condition ...
	I1210 07:26:25.907800  379535 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 07:26:25.907871  379535 node_conditions.go:123] node cpu capacity is 2
	I1210 07:26:25.907900  379535 node_conditions.go:105] duration metric: took 3.702244ms to run NodePressure ...
	I1210 07:26:25.907924  379535 start.go:242] waiting for startup goroutines ...
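
The NodePressure step reads the node's advertised capacity (the 203034800Ki of ephemeral storage and 2 CPUs logged above) and confirms no pressure condition is set. A minimal client-go sketch of the same verification, again with an illustrative kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())

		// A healthy node reports False for every pressure condition.
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status != corev1.ConditionFalse {
					fmt.Printf("%s: %s is %s\n", n.Name, c.Type, c.Status)
				}
			}
		}
	}
}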
	I1210 07:26:25.975802  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:25.976158  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:26.067213  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:26.241306  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:26.477102  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:26.477584  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:26.567491  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:26.740422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:26.976913  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:26.977413  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:27.066776  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:27.241466  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:27.476093  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:27.476538  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:27.566996  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:27.740736  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:27.975397  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:27.976786  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:28.067834  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:28.241029  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:28.476284  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:28.476879  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:28.567309  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:28.740984  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:28.976294  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:28.976689  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:29.067033  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:29.241064  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:29.475139  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:29.475278  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:29.566515  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:29.740851  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:29.975744  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:29.976953  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:30.067857  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:30.240731  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:30.475192  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:30.475468  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:30.566491  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:30.739751  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:30.975392  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:30.975687  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:31.066770  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:31.244038  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:31.475171  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:31.475402  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:31.566515  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:31.739440  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:31.975328  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:31.975434  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:32.066642  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:32.240668  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:32.475086  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:32.475267  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:32.566696  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:32.740263  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:32.976435  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:32.977885  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:33.070502  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:33.240815  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:33.476371  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:33.476485  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:33.567723  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:33.744967  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:33.976697  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:33.977190  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:34.067660  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:34.241708  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:34.476918  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:34.477448  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:34.567636  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:34.742715  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:34.975142  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:34.975677  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:35.067155  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:35.241663  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:35.474960  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:35.475368  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:35.566431  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:35.747921  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:35.977537  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:35.978174  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:36.066751  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:36.240974  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:36.480023  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:36.480673  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:36.566511  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:36.740422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:36.976243  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:36.976634  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:37.067380  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:37.240341  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:37.476284  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:37.476445  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:37.566168  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:37.740659  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:37.975726  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:37.977103  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:38.072331  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:38.239796  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:38.475679  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:38.476094  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:38.567398  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:38.740111  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:38.974829  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:38.975269  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:39.066523  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:39.240138  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:39.476919  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:39.477394  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:39.566076  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:39.740469  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:39.975070  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:39.975186  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:40.066282  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:40.240700  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:40.475626  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:40.475755  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:40.566978  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:40.740953  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:40.974740  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:40.974932  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:41.066588  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:41.240067  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:41.475266  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:41.475415  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:41.567080  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:41.740482  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:41.974926  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:41.974926  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:42.066938  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:42.240183  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:42.475545  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:42.475776  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:42.573099  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:42.739598  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:42.974915  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:42.974974  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:43.066778  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:43.239916  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:43.476339  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:43.476766  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:43.566861  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:43.741069  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:43.975797  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:43.976020  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:44.067291  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:44.240747  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:44.474887  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:44.475062  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:44.567533  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:44.740416  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:44.975487  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:44.975776  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:45.068203  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:45.240803  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:45.476159  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:45.476610  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:45.566753  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:45.740071  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:45.975132  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:45.975275  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:46.066297  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:46.240669  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:46.476707  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:46.477206  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:46.566498  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:46.740110  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:46.975700  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:46.976423  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:47.066636  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:47.240474  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:47.474416  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:47.474556  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:47.566394  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:47.741320  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:47.974808  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:47.975833  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:48.067272  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:48.239626  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:48.485884  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:48.489903  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:48.571763  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:48.740687  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:48.975588  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:48.975986  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:49.066905  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:49.240433  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:49.477596  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:49.478708  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:49.566835  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:49.740258  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:49.975469  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:49.975650  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:50.066909  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:50.240628  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:50.474872  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:50.475042  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:50.567081  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:50.740674  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:50.974575  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:50.975889  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:51.067871  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:51.240837  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:51.474989  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:51.475355  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:51.566258  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:51.740422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:51.975283  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:51.975496  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:52.067417  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:52.240079  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:52.474490  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:52.474787  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:52.575443  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:52.739637  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:52.974491  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:52.976628  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:53.066816  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:53.240089  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:53.475785  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:53.476144  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:53.566320  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:53.739799  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:53.976167  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:53.976583  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:54.076270  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:54.240561  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:54.474881  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:54.475183  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:54.566716  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:54.740284  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:54.975671  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:54.975914  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:55.066596  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:55.239541  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:55.477784  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:55.478067  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:55.566700  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:55.739909  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:55.975356  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:55.975779  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:56.066713  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:56.243320  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:56.476612  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:56.477126  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:56.566246  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:56.741116  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:56.976301  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:56.976698  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:57.067169  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:57.240561  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:57.474809  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:57.474884  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:57.566892  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:57.740269  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:57.975089  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:57.975274  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:58.066882  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:58.242967  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:58.475545  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:58.476905  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:58.567597  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:58.778081  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:58.975275  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:58.976422  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:59.066447  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:59.240258  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:59.476728  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:59.476987  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:26:59.566850  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:26:59.740975  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:26:59.975582  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:26:59.975768  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:00.077105  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:00.244417  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:00.475791  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:00.475914  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:00.566880  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:00.739986  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:00.975770  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:00.977302  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:01.066430  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:01.252209  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:01.476545  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:01.476855  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:01.567204  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:01.741111  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:01.975110  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:01.975319  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:02.075349  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:02.239759  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:02.474279  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:02.474955  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:02.566571  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:02.739818  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:02.975710  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:02.975747  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:03.076454  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:03.240423  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:03.477621  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:03.477962  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:03.567567  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:03.740736  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:03.976525  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:03.977036  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:04.067584  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:04.240413  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:04.476175  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:04.476444  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:04.565917  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:04.740226  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:04.981824  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:04.981962  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:05.078814  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:05.240293  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:05.478662  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:05.479796  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:05.566532  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:05.739932  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:05.983198  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1210 07:27:05.983898  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:06.067382  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:06.241099  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:06.476773  379535 kapi.go:107] duration metric: took 1m17.506150645s to wait for kubernetes.io/minikube-addons=registry ...
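
Each kapi.go:96 line above is one round of a label-selector poll: list the pods matching the addon's label, log the first one that is not yet Running ("Pending: [<nil>]" means the pod object existed but carried no reportable condition yet), and retry. A condensed sketch of that loop for the registry label, including the duration metric printed on success; waitForPodsByLabel is an illustrative name, not minikube's actual helper:

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsByLabel polls until every pod matching selector in ns is Running,
// mirroring the repeated "waiting for pod ..." lines above.
func waitForPodsByLabel(client kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 18*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // not created yet, or a transient API error
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	start := time.Now()
	if err := waitForPodsByLabel(client, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
		log.Fatal(err)
	}
	log.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=registry", time.Since(start))
}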
	I1210 07:27:06.477210  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:06.566714  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:06.741210  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:06.974945  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:07.068444  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:07.239869  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:07.477723  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:07.567248  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:07.740709  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:07.974010  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:08.067488  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:08.239456  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:08.474915  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:08.567277  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:08.739948  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:08.979211  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:09.080572  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:09.240962  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:09.475288  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:09.569585  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:09.746722  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:09.982336  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:10.066150  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:10.241031  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:10.475637  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:10.569166  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:10.742815  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:10.974230  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:11.066428  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:11.240053  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:11.475787  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:11.568145  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:11.741471  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:11.974359  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:12.066434  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:12.240634  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:12.476535  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:12.571813  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:12.740203  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:12.974389  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:13.067128  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:13.240786  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:13.488860  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:13.588661  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:13.739808  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:13.974085  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:14.068584  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:14.240428  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:14.474510  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:14.566547  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:14.740504  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:14.975611  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:15.066937  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:15.240718  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:15.474567  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:15.570343  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:15.740969  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:15.974831  379535 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1210 07:27:16.073489  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:16.240364  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:16.475164  379535 kapi.go:107] duration metric: took 1m27.504458298s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1210 07:27:16.567277  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:16.739743  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:17.066987  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:17.241846  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:17.574305  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:17.740955  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:18.067157  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:18.240755  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:18.567312  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:18.740214  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:19.067404  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1210 07:27:19.243981  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:19.567373  379535 kapi.go:107] duration metric: took 1m27.004170429s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1210 07:27:19.572599  379535 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-054300 cluster.
	I1210 07:27:19.576091  379535 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1210 07:27:19.579363  379535 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1210 07:27:19.740796  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:20.241297  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:20.739866  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:21.240831  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:21.744453  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:22.240395  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:22.740156  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:23.240187  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:23.740885  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:24.241232  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:24.744571  379535 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1210 07:27:25.239881  379535 kapi.go:107] duration metric: took 1m36.003467716s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1210 07:27:25.244673  379535 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, registry-creds, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1210 07:27:25.247471  379535 addons.go:530] duration metric: took 1m42.496824303s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget registry-creds storage-provisioner cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1210 07:27:25.247563  379535 start.go:247] waiting for cluster config update ...
	I1210 07:27:25.247651  379535 start.go:256] writing updated cluster config ...
	I1210 07:27:25.247982  379535 ssh_runner.go:195] Run: rm -f paused
	I1210 07:27:25.253264  379535 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:27:25.256976  379535 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4tklf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.263102  379535 pod_ready.go:94] pod "coredns-66bc5c9577-4tklf" is "Ready"
	I1210 07:27:25.263140  379535 pod_ready.go:86] duration metric: took 6.134904ms for pod "coredns-66bc5c9577-4tklf" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.266343  379535 pod_ready.go:83] waiting for pod "etcd-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.270954  379535 pod_ready.go:94] pod "etcd-addons-054300" is "Ready"
	I1210 07:27:25.271005  379535 pod_ready.go:86] duration metric: took 4.632754ms for pod "etcd-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.273634  379535 pod_ready.go:83] waiting for pod "kube-apiserver-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.278431  379535 pod_ready.go:94] pod "kube-apiserver-addons-054300" is "Ready"
	I1210 07:27:25.278461  379535 pod_ready.go:86] duration metric: took 4.802847ms for pod "kube-apiserver-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.281119  379535 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.657638  379535 pod_ready.go:94] pod "kube-controller-manager-addons-054300" is "Ready"
	I1210 07:27:25.657668  379535 pod_ready.go:86] duration metric: took 376.524925ms for pod "kube-controller-manager-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:25.858038  379535 pod_ready.go:83] waiting for pod "kube-proxy-lt4ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:26.257439  379535 pod_ready.go:94] pod "kube-proxy-lt4ld" is "Ready"
	I1210 07:27:26.257474  379535 pod_ready.go:86] duration metric: took 399.407812ms for pod "kube-proxy-lt4ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:26.457886  379535 pod_ready.go:83] waiting for pod "kube-scheduler-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:26.857737  379535 pod_ready.go:94] pod "kube-scheduler-addons-054300" is "Ready"
	I1210 07:27:26.857768  379535 pod_ready.go:86] duration metric: took 399.813879ms for pod "kube-scheduler-addons-054300" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 07:27:26.857781  379535 pod_ready.go:40] duration metric: took 1.604474091s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 07:27:26.917742  379535 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1210 07:27:26.921655  379535 out.go:179] * Done! kubectl is now configured to use "addons-054300" cluster and "default" namespace by default
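
	A note on the gcp-auth messages above: the addon's mutating webhook injects GCP credentials into every new pod, and the log names the `gcp-auth-skip-secret` label as the opt-out. A minimal sketch of a pod that opts out (the pod name and the label value are assumptions for illustration; the log only confirms the label key):

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds               # hypothetical name, illustration only
	    labels:
	      gcp-auth-skip-secret: "true"   # assumed value; the log confirms only the key
	  spec:
	    containers:
	    - name: app
	      image: public.ecr.aws/nginx/nginx:alpine   # an image already pulled elsewhere in this report

	Pods carrying the label are left untouched by the webhook; every other pod created in the addons-054300 cluster gets the credential mount described in the log.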
	
	
	==> CRI-O <==
	Dec 10 07:28:12 addons-054300 crio[826]: time="2025-12-10T07:28:12.892494722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:28:12 addons-054300 crio[826]: time="2025-12-10T07:28:12.893008745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:28:12 addons-054300 crio[826]: time="2025-12-10T07:28:12.911270982Z" level=info msg="Created container d94da1e38a2593a30bcf9e850bf4f69708ed9423c21a3676fa544b1fbb16cb5d: local-path-storage/helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36/helper-pod" id=f44a42d8-ae87-475c-8d21-09874e978ca8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:28:12 addons-054300 crio[826]: time="2025-12-10T07:28:12.913909757Z" level=info msg="Starting container: d94da1e38a2593a30bcf9e850bf4f69708ed9423c21a3676fa544b1fbb16cb5d" id=348448dc-d954-4f8a-89e2-71ca04afb5e9 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:28:12 addons-054300 crio[826]: time="2025-12-10T07:28:12.918218857Z" level=info msg="Started container" PID=5586 containerID=d94da1e38a2593a30bcf9e850bf4f69708ed9423c21a3676fa544b1fbb16cb5d description=local-path-storage/helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36/helper-pod id=348448dc-d954-4f8a-89e2-71ca04afb5e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e8ede72c8bca4de5d57ee95198c1c01f8d623fbb75094bd757a64ba40b82864
	Dec 10 07:28:14 addons-054300 crio[826]: time="2025-12-10T07:28:14.347348785Z" level=info msg="Stopping pod sandbox: 7e8ede72c8bca4de5d57ee95198c1c01f8d623fbb75094bd757a64ba40b82864" id=8f9fa16d-a41a-43b7-8a41-722c7b2e58f9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 10 07:28:14 addons-054300 crio[826]: time="2025-12-10T07:28:14.347612156Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36 Namespace:local-path-storage ID:7e8ede72c8bca4de5d57ee95198c1c01f8d623fbb75094bd757a64ba40b82864 UID:05f35efe-76f9-4f35-8d9a-342228792bb0 NetNS:/var/run/netns/5fad2161-9009-4c2f-9386-054bcd063510 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d148}] Aliases:map[]}"
	Dec 10 07:28:14 addons-054300 crio[826]: time="2025-12-10T07:28:14.347747197Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36 from CNI network \"kindnet\" (type=ptp)"
	Dec 10 07:28:14 addons-054300 crio[826]: time="2025-12-10T07:28:14.373133088Z" level=info msg="Stopped pod sandbox: 7e8ede72c8bca4de5d57ee95198c1c01f8d623fbb75094bd757a64ba40b82864" id=8f9fa16d-a41a-43b7-8a41-722c7b2e58f9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.341852005Z" level=info msg="Running pod sandbox: default/task-pv-pod-restore/POD" id=3a76c82d-5efa-4b6a-816d-8470107ac17c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.341924711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.355637994Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:856ff7081485d91aa029b81aa3bca8a90be49c5f89c08c1f07a92bfb702fe19a UID:7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8 NetNS:/var/run/netns/e9364cb1-92b3-4a9f-a2f3-56b3a2786b19 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002b6e3c8}] Aliases:map[]}"
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.355678257Z" level=info msg="Adding pod default_task-pv-pod-restore to CNI network \"kindnet\" (type=ptp)"
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.377391296Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:856ff7081485d91aa029b81aa3bca8a90be49c5f89c08c1f07a92bfb702fe19a UID:7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8 NetNS:/var/run/netns/e9364cb1-92b3-4a9f-a2f3-56b3a2786b19 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002b6e3c8}] Aliases:map[]}"
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.37840893Z" level=info msg="Checking pod default_task-pv-pod-restore for CNI network kindnet (type=ptp)"
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.390681929Z" level=info msg="Ran pod sandbox 856ff7081485d91aa029b81aa3bca8a90be49c5f89c08c1f07a92bfb702fe19a with infra container: default/task-pv-pod-restore/POD" id=3a76c82d-5efa-4b6a-816d-8470107ac17c name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.401209896Z" level=info msg="Checking image status: public.ecr.aws/nginx/nginx:alpine" id=7d1b36c2-d1ad-4dab-8ce1-d416fa5cf830 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.407249866Z" level=info msg="Checking image status: public.ecr.aws/nginx/nginx:alpine" id=a9a1f61a-766d-4a77-848c-51853f8731f6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.415137433Z" level=info msg="Creating container: default/task-pv-pod-restore/task-pv-container" id=2a6f88e3-e0c2-4ff8-aae6-37b0f457590b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.415264565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.424018855Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.424611377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.450305054Z" level=info msg="Created container 62e77cee08772fd45d23fb2567bdbfffacc5b978de2c93a809414955c71ea87b: default/task-pv-pod-restore/task-pv-container" id=2a6f88e3-e0c2-4ff8-aae6-37b0f457590b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.45140044Z" level=info msg="Starting container: 62e77cee08772fd45d23fb2567bdbfffacc5b978de2c93a809414955c71ea87b" id=c892d072-7bd3-4be4-ab2e-39a43a77d7a1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 07:28:19 addons-054300 crio[826]: time="2025-12-10T07:28:19.458739727Z" level=info msg="Started container" PID=5749 containerID=62e77cee08772fd45d23fb2567bdbfffacc5b978de2c93a809414955c71ea87b description=default/task-pv-pod-restore/task-pv-container id=c892d072-7bd3-4be4-ab2e-39a43a77d7a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=856ff7081485d91aa029b81aa3bca8a90be49c5f89c08c1f07a92bfb702fe19a
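
	The task-pv-pod-restore pod started above appears to be the CSI test's restore step: a pod consuming a PersistentVolumeClaim provisioned from a VolumeSnapshot (the apiserver log later in this report shows the volumesnapshots.snapshot.storage.k8s.io evaluator being added). A hedged sketch of what such a snapshot-backed claim looks like; all names and the storage class here are assumptions, since the report does not show the actual manifests:

	  apiVersion: v1
	  kind: PersistentVolumeClaim
	  metadata:
	    name: pvc-restore-example          # illustrative name only
	  spec:
	    dataSource:
	      name: example-snapshot           # assumed VolumeSnapshot name
	      kind: VolumeSnapshot
	      apiGroup: snapshot.storage.k8s.io
	    accessModes: ["ReadWriteOnce"]
	    storageClassName: csi-hostpath-sc  # assumed class for the csi-hostpath-driver addon
	    resources:
	      requests:
	        storage: 1Gi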
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	62e77cee08772       cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1                                                                             Less than a second ago   Running             task-pv-container                        0                   856ff7081485d       task-pv-pod-restore                                          default
	d94da1e38a259       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             6 seconds ago            Exited              helper-pod                               0                   7e8ede72c8bca       helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36   local-path-storage
	bd8aabe4a1552       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            10 seconds ago           Exited              busybox                                  0                   37ab0a6c27bd9       test-local-path                                              default
	afb717f95ae28       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            13 seconds ago           Exited              helper-pod                               0                   e710d03df9d9d       helper-pod-create-pvc-f752037c-3c31-451d-be8f-825295773e36   local-path-storage
	5179e9b41e4bd       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          49 seconds ago           Running             busybox                                  0                   ae5bd9c38d50c       busybox                                                      default
	93c6c5614c0a9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          54 seconds ago           Running             csi-snapshotter                          0                   df048affbebe5       csi-hostpathplugin-bmkhb                                     kube-system
	2f1be47dacec8       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          56 seconds ago           Running             csi-provisioner                          0                   df048affbebe5       csi-hostpathplugin-bmkhb                                     kube-system
	c796a7524dca5       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            57 seconds ago           Running             liveness-probe                           0                   df048affbebe5       csi-hostpathplugin-bmkhb                                     kube-system
	31f1c3a5096a8       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           58 seconds ago           Running             hostpath                                 0                   df048affbebe5       csi-hostpathplugin-bmkhb                                     kube-system
	7eebbffe34f04       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                59 seconds ago           Running             node-driver-registrar                    0                   df048affbebe5       csi-hostpathplugin-bmkhb                                     kube-system
	f3fe35bc7c9db       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 About a minute ago       Running             gcp-auth                                 0                   3be6747a9fc1d       gcp-auth-78565c9fb4-ws495                                    gcp-auth
	cc2386e1e7501       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             About a minute ago       Running             controller                               0                   8bacb1dc63a85       ingress-nginx-controller-85d4c799dd-htr8f                    ingress-nginx
	ae6cef0f10ca5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            About a minute ago       Running             gadget                                   0                   b036a941462e5       gadget-rhzvh                                                 gadget
	7513d10c4f49b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago       Running             registry-proxy                           0                   506ef510f45b5       registry-proxy-x77gq                                         kube-system
	f29e1be413b72       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             About a minute ago       Exited              patch                                    2                   6b588bbf05bc4       gcp-auth-certs-patch-6244c                                   gcp-auth
	5668f24e462ca       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago       Running             nvidia-device-plugin-ctr                 0                   0a1fd72a315c4       nvidia-device-plugin-daemonset-jgw4d                         kube-system
	0b8ebb10e62eb       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             About a minute ago       Exited              patch                                    2                   03ca060766dcd       ingress-nginx-admission-patch-tlj69                          ingress-nginx
	03d15632d2655       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago       Running             csi-external-health-monitor-controller   0                   df048affbebe5       csi-hostpathplugin-bmkhb                                     kube-system
	066dde63f910d       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago       Running             csi-attacher                             0                   b8c927998cde5       csi-hostpath-attacher-0                                      kube-system
	607af77f9ee4c       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago       Running             csi-resizer                              0                   f74deb4fd23cd       csi-hostpath-resizer-0                                       kube-system
	de2062e9f92ec       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago       Exited              create                                   0                   7a03a57b2de9b       ingress-nginx-admission-create-5dvp6                         ingress-nginx
	119a600e8cd24       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago       Running             yakd                                     0                   e8eed067ab62c       yakd-dashboard-5ff678cb9-r7798                               yakd-dashboard
	b99ece48f039a       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago       Running             volume-snapshot-controller               0                   1ebf01df32e18       snapshot-controller-7d9fbc56b8-9c9b8                         kube-system
	44b34e0a56c70       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago       Running             volume-snapshot-controller               0                   6d3667317c1ba       snapshot-controller-7d9fbc56b8-p5w2h                         kube-system
	2ba3f318a735a       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               About a minute ago       Running             cloud-spanner-emulator                   0                   94d97ee14a0aa       cloud-spanner-emulator-5bdddb765-59bbv                       default
	a21d19bf0c7d1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago       Running             local-path-provisioner                   0                   9fea308758c43       local-path-provisioner-648f6765c9-vwdrf                      local-path-storage
	b51e7b373a333       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago       Running             registry                                 0                   19e12fc266ce4       registry-6b586f9694-rgr2q                                    kube-system
	4c160e32d403d       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago       Running             metrics-server                           0                   58f37a8226973       metrics-server-85b7d694d7-pcvgr                              kube-system
	ea57c044de907       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago       Running             minikube-ingress-dns                     0                   54dfc2e7a3622       kube-ingress-dns-minikube                                    kube-system
	010ebc9ab887d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago       Running             coredns                                  0                   38cf39d2254a6       coredns-66bc5c9577-4tklf                                     kube-system
	bf6af03dc7508       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago       Running             storage-provisioner                      0                   86004d049827d       storage-provisioner                                          kube-system
	cd4a11fe27652       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago            Running             kindnet-cni                              0                   a02f0c796fcb7       kindnet-b47q8                                                kube-system
	423282b955e32       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             2 minutes ago            Running             kube-proxy                               0                   e9ba591950e5e       kube-proxy-lt4ld                                             kube-system
	6f7abeab2dc46       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago            Running             kube-controller-manager                  0                   e57f1308f317f       kube-controller-manager-addons-054300                        kube-system
	7e676b17ce03a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago            Running             etcd                                     0                   c1056718c8e52       etcd-addons-054300                                           kube-system
	f47e61eab5569       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago            Running             kube-apiserver                           0                   c0e60f07997fd       kube-apiserver-addons-054300                                 kube-system
	6f9a84d527a09       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago            Running             kube-scheduler                           0                   b618244dd39f3       kube-scheduler-addons-054300                                 kube-system
	
	
	==> coredns [010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f] <==
	[INFO] 10.244.0.17:38351 - 45103 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002102081s
	[INFO] 10.244.0.17:38351 - 59217 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000238714s
	[INFO] 10.244.0.17:38351 - 15747 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000149663s
	[INFO] 10.244.0.17:40308 - 19172 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000165441s
	[INFO] 10.244.0.17:40308 - 18942 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000207198s
	[INFO] 10.244.0.17:46740 - 46626 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000141548s
	[INFO] 10.244.0.17:46740 - 47087 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00015891s
	[INFO] 10.244.0.17:53160 - 56143 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133138s
	[INFO] 10.244.0.17:53160 - 55963 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146553s
	[INFO] 10.244.0.17:35751 - 63766 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001679916s
	[INFO] 10.244.0.17:35751 - 63569 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001837308s
	[INFO] 10.244.0.17:35452 - 48149 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000142155s
	[INFO] 10.244.0.17:35452 - 47960 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090553s
	[INFO] 10.244.0.21:57352 - 21832 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181524s
	[INFO] 10.244.0.21:60969 - 44327 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000345619s
	[INFO] 10.244.0.21:47451 - 15684 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110433s
	[INFO] 10.244.0.21:39403 - 32649 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000243973s
	[INFO] 10.244.0.21:54234 - 8527 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000140719s
	[INFO] 10.244.0.21:39063 - 38368 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109293s
	[INFO] 10.244.0.21:59778 - 15964 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00246672s
	[INFO] 10.244.0.21:57912 - 14364 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002261418s
	[INFO] 10.244.0.21:53530 - 36713 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001420263s
	[INFO] 10.244.0.21:51043 - 32048 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003380804s
	[INFO] 10.244.0.23:35963 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144305s
	[INFO] 10.244.0.23:38505 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105987s
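
	The NXDOMAIN lines above are not errors: with the kubelet's default ndots:5 resolv.conf, a lookup for registry.kube-system.svc.cluster.local is first expanded through every search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal suffix) before the bare name resolves with NOERROR. Workloads that want to skip that expansion can lower ndots per pod; a minimal sketch, with an assumed pod name:

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: low-ndots                    # hypothetical example pod
	  spec:
	    dnsConfig:
	      options:
	      - name: ndots
	        value: "1"                     # treat dotted names as absolute, skipping search-list expansion
	    containers:
	    - name: app
	      image: gcr.io/k8s-minikube/busybox   # image already present in this report
	      command: ["sleep", "3600"]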
	
	
	==> describe nodes <==
	Name:               addons-054300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-054300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=addons-054300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T07_25_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-054300
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-054300"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 07:25:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-054300
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 07:28:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 07:28:11 +0000   Wed, 10 Dec 2025 07:25:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 07:28:11 +0000   Wed, 10 Dec 2025 07:25:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 07:28:11 +0000   Wed, 10 Dec 2025 07:25:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 07:28:11 +0000   Wed, 10 Dec 2025 07:26:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-054300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bfdf75342fda7ce4dcc05536938a4f8
	  System UUID:                9c598586-ae7f-4553-b778-30f36bc21e4b
	  Boot ID:                    9ae06026-ffc7-4eb4-912b-d54adcad0f66
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  default                     cloud-spanner-emulator-5bdddb765-59bbv       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gadget                      gadget-rhzvh                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-78565c9fb4-ws495                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-htr8f    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m32s
	  kube-system                 coredns-66bc5c9577-4tklf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m38s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 csi-hostpathplugin-bmkhb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 etcd-addons-054300                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m43s
	  kube-system                 kindnet-b47q8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m38s
	  kube-system                 kube-apiserver-addons-054300                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-controller-manager-addons-054300        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-lt4ld                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-scheduler-addons-054300                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 metrics-server-85b7d694d7-pcvgr              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m33s
	  kube-system                 nvidia-device-plugin-daemonset-jgw4d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 registry-6b586f9694-rgr2q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 registry-creds-764b6fb674-7pk58              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 registry-proxy-x77gq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 snapshot-controller-7d9fbc56b8-9c9b8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 snapshot-controller-7d9fbc56b8-p5w2h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  local-path-storage          local-path-provisioner-648f6765c9-vwdrf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-r7798               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m36s  kube-proxy       
	  Normal   Starting                 2m43s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m43s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m43s  kubelet          Node addons-054300 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m43s  kubelet          Node addons-054300 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m43s  kubelet          Node addons-054300 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m39s  node-controller  Node addons-054300 event: Registered Node addons-054300 in Controller
	  Normal   NodeReady                116s   kubelet          Node addons-054300 status is now: NodeReady
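
	The Requests/Limits percentages in the pod table above are computed against this node's allocatable 2 CPUs and 8022300Ki of memory, which is why 1050m of CPU requests shows as 52%. As a sketch of where those numbers come from, the coredns row (100m CPU / 70Mi requests, 170Mi memory limit) corresponds to a per-container resources stanza roughly like the following; the pod wrapper is illustrative, not the actual coredns manifest:

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: resources-example            # illustrative only
	  spec:
	    containers:
	    - name: app
	      image: gcr.io/k8s-minikube/busybox
	      command: ["sleep", "3600"]
	      resources:
	        requests:
	          cpu: 100m                    # 5% of the 2-CPU allocatable above
	          memory: 70Mi                 # 0% as rendered in the table (rounds below 1%)
	        limits:
	          memory: 170Mi                # 2% of 8022300Ki, matching the coredns row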
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765] <==
	{"level":"warn","ts":"2025-12-10T07:25:32.866955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.880694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.897992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.923591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.961811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:32.989738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.029809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.056867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.089866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.119478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.136671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.181146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.206582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.229221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.267446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.313920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.340285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.365093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:33.524890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:49.607184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:25:49.619106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:26:11.501987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:26:11.512040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:26:11.540468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T07:26:11.555215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51118","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [f3fe35bc7c9db5bb24f28d54b1bbac54bd9481d9d194946197ad5c03255b6594] <==
	2025/12/10 07:27:18 GCP Auth Webhook started!
	2025/12/10 07:27:27 Ready to marshal response ...
	2025/12/10 07:27:27 Ready to write response ...
	2025/12/10 07:27:27 Ready to marshal response ...
	2025/12/10 07:27:27 Ready to write response ...
	2025/12/10 07:27:27 Ready to marshal response ...
	2025/12/10 07:27:27 Ready to write response ...
	2025/12/10 07:27:46 Ready to marshal response ...
	2025/12/10 07:27:46 Ready to write response ...
	2025/12/10 07:27:55 Ready to marshal response ...
	2025/12/10 07:27:55 Ready to write response ...
	2025/12/10 07:28:04 Ready to marshal response ...
	2025/12/10 07:28:04 Ready to write response ...
	2025/12/10 07:28:04 Ready to marshal response ...
	2025/12/10 07:28:04 Ready to write response ...
	2025/12/10 07:28:12 Ready to marshal response ...
	2025/12/10 07:28:12 Ready to write response ...
	2025/12/10 07:28:19 Ready to marshal response ...
	2025/12/10 07:28:19 Ready to write response ...
	
	
	==> kernel <==
	 07:28:20 up  2:10,  0 user,  load average: 2.94, 2.47, 1.96
	Linux addons-054300 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7] <==
	I1210 07:26:15.319100       1 controller.go:711] "Syncing nftables rules"
	I1210 07:26:23.721631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:26:23.721689       1 main.go:301] handling current node
	I1210 07:26:33.725225       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:26:33.725262       1 main.go:301] handling current node
	I1210 07:26:43.717211       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:26:43.717258       1 main.go:301] handling current node
	I1210 07:26:53.718201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:26:53.718230       1 main.go:301] handling current node
	I1210 07:27:03.718143       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:27:03.718207       1 main.go:301] handling current node
	I1210 07:27:13.717748       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:27:13.717847       1 main.go:301] handling current node
	I1210 07:27:23.718260       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:27:23.718337       1 main.go:301] handling current node
	I1210 07:27:33.720465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:27:33.720549       1 main.go:301] handling current node
	I1210 07:27:43.719619       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:27:43.719656       1 main.go:301] handling current node
	I1210 07:27:53.719063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:27:53.719182       1 main.go:301] handling current node
	I1210 07:28:03.720213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:28:03.720261       1 main.go:301] handling current node
	I1210 07:28:13.718128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 07:28:13.718259       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83] <==
	I1210 07:25:49.185147       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.107.109.53"}
	W1210 07:25:49.603214       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 07:25:49.618355       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1210 07:25:52.429329       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.154.198"}
	W1210 07:26:11.495930       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 07:26:11.512066       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 07:26:11.539829       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1210 07:26:11.555278       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1210 07:26:24.351174       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.154.198:443: connect: connection refused
	E1210 07:26:24.351291       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.154.198:443: connect: connection refused" logger="UnhandledError"
	W1210 07:26:24.351872       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.154.198:443: connect: connection refused
	E1210 07:26:24.351915       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.154.198:443: connect: connection refused" logger="UnhandledError"
	W1210 07:26:24.495647       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.154.198:443: connect: connection refused
	E1210 07:26:24.495778       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.154.198:443: connect: connection refused" logger="UnhandledError"
	E1210 07:26:37.807314       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.255.146:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.255.146:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.255.146:443: connect: connection refused" logger="UnhandledError"
	W1210 07:26:37.807864       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 07:26:37.809124       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 07:26:37.845069       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1210 07:26:37.861399       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1210 07:27:35.976399       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34096: use of closed network connection
	I1210 07:28:04.166379       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1210 07:28:06.314681       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009] <==
	I1210 07:25:41.524975       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 07:25:41.525012       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 07:25:41.525017       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1210 07:25:41.525346       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1210 07:25:41.525397       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 07:25:41.525856       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 07:25:41.528463       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 07:25:41.528650       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 07:25:41.531097       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1210 07:25:41.531107       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 07:25:41.531122       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 07:25:41.532682       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 07:25:41.543116       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:25:41.543143       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 07:25:41.543150       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1210 07:25:47.776407       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 07:25:47.794157       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1210 07:26:11.488441       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 07:26:11.488641       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1210 07:26:11.488690       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1210 07:26:11.521455       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1210 07:26:11.527538       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1210 07:26:11.589579       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1210 07:26:11.628162       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 07:26:26.489695       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365] <==
	I1210 07:25:43.667383       1 server_linux.go:53] "Using iptables proxy"
	I1210 07:25:43.760735       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 07:25:43.869049       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 07:25:43.869088       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1210 07:25:43.869169       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 07:25:43.903818       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 07:25:43.903872       1 server_linux.go:132] "Using iptables Proxier"
	I1210 07:25:43.917270       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 07:25:43.921159       1 server.go:527] "Version info" version="v1.34.2"
	I1210 07:25:43.921195       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 07:25:43.925351       1 config.go:200] "Starting service config controller"
	I1210 07:25:43.925366       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 07:25:43.925382       1 config.go:106] "Starting endpoint slice config controller"
	I1210 07:25:43.925386       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 07:25:43.925397       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 07:25:43.925401       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 07:25:43.925986       1 config.go:309] "Starting node config controller"
	I1210 07:25:43.925993       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 07:25:43.926000       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 07:25:44.025893       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 07:25:44.025933       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 07:25:44.025978       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f] <==
	E1210 07:25:34.614859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:25:34.618611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:25:34.623341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1210 07:25:34.623646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:25:34.623697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 07:25:34.623743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 07:25:34.623788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:25:34.623826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:25:34.623857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 07:25:34.623977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:25:34.624013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 07:25:34.624048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 07:25:35.424263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 07:25:35.434462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 07:25:35.455846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 07:25:35.458298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 07:25:35.475080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 07:25:35.508974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1210 07:25:35.521913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 07:25:35.645424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 07:25:35.656772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 07:25:35.758413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 07:25:35.823373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 07:25:36.140962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1210 07:25:37.903077       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 07:28:12 addons-054300 kubelet[1267]: I1210 07:28:12.360648    1267 status_manager.go:1073] "Failed to delete status for pod" pod="default/test-local-path" err="pods \"test-local-path\" not found"
	Dec 10 07:28:12 addons-054300 kubelet[1267]: I1210 07:28:12.580137    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/05f35efe-76f9-4f35-8d9a-342228792bb0-data\") pod \"helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36\" (UID: \"05f35efe-76f9-4f35-8d9a-342228792bb0\") " pod="local-path-storage/helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36"
	Dec 10 07:28:12 addons-054300 kubelet[1267]: I1210 07:28:12.580720    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsbh\" (UniqueName: \"kubernetes.io/projected/05f35efe-76f9-4f35-8d9a-342228792bb0-kube-api-access-pcsbh\") pod \"helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36\" (UID: \"05f35efe-76f9-4f35-8d9a-342228792bb0\") " pod="local-path-storage/helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36"
	Dec 10 07:28:12 addons-054300 kubelet[1267]: I1210 07:28:12.580820    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/05f35efe-76f9-4f35-8d9a-342228792bb0-gcp-creds\") pod \"helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36\" (UID: \"05f35efe-76f9-4f35-8d9a-342228792bb0\") " pod="local-path-storage/helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36"
	Dec 10 07:28:12 addons-054300 kubelet[1267]: I1210 07:28:12.580940    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/05f35efe-76f9-4f35-8d9a-342228792bb0-script\") pod \"helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36\" (UID: \"05f35efe-76f9-4f35-8d9a-342228792bb0\") " pod="local-path-storage/helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36"
	Dec 10 07:28:13 addons-054300 kubelet[1267]: I1210 07:28:13.364269    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c9ca5a-073e-41c9-9b7b-b20fc8b28c1e" path="/var/lib/kubelet/pods/c9c9ca5a-073e-41c9-9b7b-b20fc8b28c1e/volumes"
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.496838    1267 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/05f35efe-76f9-4f35-8d9a-342228792bb0-script\") pod \"05f35efe-76f9-4f35-8d9a-342228792bb0\" (UID: \"05f35efe-76f9-4f35-8d9a-342228792bb0\") "
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.496947    1267 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/05f35efe-76f9-4f35-8d9a-342228792bb0-gcp-creds\") pod \"05f35efe-76f9-4f35-8d9a-342228792bb0\" (UID: \"05f35efe-76f9-4f35-8d9a-342228792bb0\") "
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.496975    1267 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/05f35efe-76f9-4f35-8d9a-342228792bb0-data\") pod \"05f35efe-76f9-4f35-8d9a-342228792bb0\" (UID: \"05f35efe-76f9-4f35-8d9a-342228792bb0\") "
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.497002    1267 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcsbh\" (UniqueName: \"kubernetes.io/projected/05f35efe-76f9-4f35-8d9a-342228792bb0-kube-api-access-pcsbh\") pod \"05f35efe-76f9-4f35-8d9a-342228792bb0\" (UID: \"05f35efe-76f9-4f35-8d9a-342228792bb0\") "
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.497673    1267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f35efe-76f9-4f35-8d9a-342228792bb0-script" (OuterVolumeSpecName: "script") pod "05f35efe-76f9-4f35-8d9a-342228792bb0" (UID: "05f35efe-76f9-4f35-8d9a-342228792bb0"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.497853    1267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f35efe-76f9-4f35-8d9a-342228792bb0-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "05f35efe-76f9-4f35-8d9a-342228792bb0" (UID: "05f35efe-76f9-4f35-8d9a-342228792bb0"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.497948    1267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05f35efe-76f9-4f35-8d9a-342228792bb0-data" (OuterVolumeSpecName: "data") pod "05f35efe-76f9-4f35-8d9a-342228792bb0" (UID: "05f35efe-76f9-4f35-8d9a-342228792bb0"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.499320    1267 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f35efe-76f9-4f35-8d9a-342228792bb0-kube-api-access-pcsbh" (OuterVolumeSpecName: "kube-api-access-pcsbh") pod "05f35efe-76f9-4f35-8d9a-342228792bb0" (UID: "05f35efe-76f9-4f35-8d9a-342228792bb0"). InnerVolumeSpecName "kube-api-access-pcsbh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.597686    1267 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/05f35efe-76f9-4f35-8d9a-342228792bb0-script\") on node \"addons-054300\" DevicePath \"\""
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.597726    1267 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/05f35efe-76f9-4f35-8d9a-342228792bb0-gcp-creds\") on node \"addons-054300\" DevicePath \"\""
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.597740    1267 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/05f35efe-76f9-4f35-8d9a-342228792bb0-data\") on node \"addons-054300\" DevicePath \"\""
	Dec 10 07:28:14 addons-054300 kubelet[1267]: I1210 07:28:14.597751    1267 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pcsbh\" (UniqueName: \"kubernetes.io/projected/05f35efe-76f9-4f35-8d9a-342228792bb0-kube-api-access-pcsbh\") on node \"addons-054300\" DevicePath \"\""
	Dec 10 07:28:15 addons-054300 kubelet[1267]: I1210 07:28:15.351770    1267 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e8ede72c8bca4de5d57ee95198c1c01f8d623fbb75094bd757a64ba40b82864"
	Dec 10 07:28:15 addons-054300 kubelet[1267]: E1210 07:28:15.353528    1267 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36\" is forbidden: User \"system:node:addons-054300\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-054300' and this object" podUID="05f35efe-76f9-4f35-8d9a-342228792bb0" pod="local-path-storage/helper-pod-delete-pvc-f752037c-3c31-451d-be8f-825295773e36"
	Dec 10 07:28:15 addons-054300 kubelet[1267]: I1210 07:28:15.364623    1267 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f35efe-76f9-4f35-8d9a-342228792bb0" path="/var/lib/kubelet/pods/05f35efe-76f9-4f35-8d9a-342228792bb0/volumes"
	Dec 10 07:28:19 addons-054300 kubelet[1267]: I1210 07:28:19.145106    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69ck\" (UniqueName: \"kubernetes.io/projected/7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8-kube-api-access-w69ck\") pod \"task-pv-pod-restore\" (UID: \"7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8\") " pod="default/task-pv-pod-restore"
	Dec 10 07:28:19 addons-054300 kubelet[1267]: I1210 07:28:19.145174    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8\") " pod="default/task-pv-pod-restore"
	Dec 10 07:28:19 addons-054300 kubelet[1267]: I1210 07:28:19.145236    1267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28a67d34-ac8f-438f-8dcb-950fe23b6528\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^cc008a57-d599-11f0-b363-9e85995144ba\") pod \"task-pv-pod-restore\" (UID: \"7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8\") " pod="default/task-pv-pod-restore"
	Dec 10 07:28:19 addons-054300 kubelet[1267]: I1210 07:28:19.256627    1267 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-28a67d34-ac8f-438f-8dcb-950fe23b6528\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^cc008a57-d599-11f0-b363-9e85995144ba\") pod \"task-pv-pod-restore\" (UID: \"7ce802d5-d7bc-4fab-b4a0-629c94cf5fe8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/6df976e9de149079c3400a390414859f78c2300c099e5a3e444b43b3ebf1be00/globalmount\"" pod="default/task-pv-pod-restore"
	
	
	==> storage-provisioner [bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c] <==
	W1210 07:27:55.765635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:27:57.770307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:27:57.784912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:27:59.788427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:27:59.793379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:01.797300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:01.804262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:03.806958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:03.811417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:05.815898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:05.828051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:07.843300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:07.890325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:09.893649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:09.902564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:11.906559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:11.912533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:13.915997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:13.923176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:15.926166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:15.930649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:17.938952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:17.947820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:19.950534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1210 07:28:19.957864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-054300 -n addons-054300
helpers_test.go:270: (dbg) Run:  kubectl --context addons-054300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-5dvp6 ingress-nginx-admission-patch-tlj69 registry-creds-764b6fb674-7pk58
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-054300 describe pod ingress-nginx-admission-create-5dvp6 ingress-nginx-admission-patch-tlj69 registry-creds-764b6fb674-7pk58
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-054300 describe pod ingress-nginx-admission-create-5dvp6 ingress-nginx-admission-patch-tlj69 registry-creds-764b6fb674-7pk58: exit status 1 (81.249099ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5dvp6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tlj69" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-7pk58" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-054300 describe pod ingress-nginx-admission-create-5dvp6 ingress-nginx-admission-patch-tlj69 registry-creds-764b6fb674-7pk58: exit status 1
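Note: the NotFound results above are expected rather than a second failure. `kubectl describe pod` without `-n` only searches the default namespace, while the listed pods most plausibly live in ingress-nginx (the admission jobs) and kube-system (registry-creds), so the post-mortem describe cannot find them. A namespace-qualified lookup would be (namespaces inferred from the usual addon layouts, not from this report; the pods may also have been garbage-collected by the time it runs):

	$ kubectl --context addons-054300 -n ingress-nginx describe pod ingress-nginx-admission-create-5dvp6
	$ kubectl --context addons-054300 -n kube-system describe pod registry-creds-764b6fb674-7pk58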
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable headlamp --alsologtostderr -v=1: exit status 11 (259.418586ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:28:21.355730  387266 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:21.356677  387266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:21.356723  387266 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:21.356743  387266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:21.357567  387266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:21.357943  387266 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:21.358333  387266 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:21.358354  387266 addons.go:622] checking whether the cluster is paused
	I1210 07:28:21.358465  387266 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:21.358481  387266 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:21.358994  387266 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:21.379143  387266 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:21.379201  387266 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:21.410773  387266 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:21.505722  387266 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:21.505803  387266 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:21.535381  387266 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:21.535411  387266 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:21.535417  387266 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:21.535421  387266 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:21.535425  387266 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:21.535429  387266 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:21.535432  387266 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:21.535436  387266 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:21.535439  387266 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:21.535446  387266 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:21.535449  387266 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:21.535452  387266 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:21.535456  387266 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:21.535460  387266 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:21.535463  387266 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:21.535469  387266 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:21.535476  387266 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:21.535488  387266 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:21.535492  387266 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:21.535495  387266 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:21.535507  387266 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:21.535515  387266 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:21.535518  387266 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:21.535522  387266 cri.go:89] found id: ""
	I1210 07:28:21.535575  387266 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:21.551177  387266 out.go:203] 
	W1210 07:28:21.554209  387266 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:21.554236  387266 out.go:285] * 
	* 
	W1210 07:28:21.559967  387266 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:21.562945  387266 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.38s)
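Note on the failure mode: the stderr above shows that `addons disable` never reaches Headlamp itself. Before disabling anything, minikube checks whether the cluster is paused (addons.go:622) by listing kube-system containers with crictl and then running `sudo runc list -f json`; on this crio node `/run/runc` does not exist, so the runc call exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED. The same probe can be replayed by hand; a sketch using the profile name from the log (the crun state path is an assumption about crio's default OCI runtime on this image):

	$ minikube -p addons-054300 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds: prints the container IDs listed above
	$ minikube -p addons-054300 ssh -- sudo runc list -f json    # fails: open /run/runc: no such file or directory
	$ minikube -p addons-054300 ssh -- ls /run/crun              # if crio is using crun (assumption), runtime state lives here instead

The same MK_ADDON_DISABLE_PAUSED exit recurs in the CloudSpanner and LocalPath failures below.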

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.33s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-59bbv" [ade8773d-173d-4608-b8c1-742e23531297] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004045646s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (325.085085ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:28:17.923399  386693 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:17.924397  386693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:17.924466  386693 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:17.924488  386693 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:17.924911  386693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:17.925336  386693 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:17.925914  386693 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:17.925961  386693 addons.go:622] checking whether the cluster is paused
	I1210 07:28:17.926138  386693 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:17.926177  386693 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:17.926764  386693 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:17.951772  386693 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:17.951833  386693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:17.969746  386693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:18.094071  386693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:18.094158  386693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:18.152707  386693 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:18.152731  386693 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:18.152736  386693 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:18.152740  386693 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:18.152743  386693 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:18.152747  386693 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:18.152750  386693 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:18.152753  386693 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:18.152786  386693 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:18.152799  386693 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:18.152834  386693 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:18.152838  386693 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:18.152855  386693 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:18.152862  386693 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:18.152866  386693 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:18.152889  386693 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:18.152900  386693 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:18.152905  386693 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:18.152908  386693 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:18.152911  386693 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:18.152916  386693 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:18.152919  386693 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:18.152922  386693 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:18.152925  386693 cri.go:89] found id: ""
	I1210 07:28:18.152990  386693 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:18.169196  386693 out.go:203] 
	W1210 07:28:18.172025  386693 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:18.172057  386693 out.go:285] * 
	* 
	W1210 07:28:18.177765  386693 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:18.181357  386693 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.33s)
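As above, the disable step, not the addon, fails: the cloud-spanner-emulator pod was healthy within ~5s before the paused-state probe aborted. If the goal is only to remove the workload while the runc probe is broken, deleting the addon's deployment directly is a possible stopgap (deployment name inferred from the pod name above; a sketch, not a supported minikube path):

	$ kubectl --context addons-054300 -n default delete deployment cloud-spanner-emulator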

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.47s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-054300 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-054300 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-054300 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [c9c9ca5a-073e-41c9-9b7b-b20fc8b28c1e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [c9c9ca5a-073e-41c9-9b7b-b20fc8b28c1e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [c9c9ca5a-073e-41c9-9b7b-b20fc8b28c1e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004369249s
addons_test.go:969: (dbg) Run:  kubectl --context addons-054300 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 ssh "cat /opt/local-path-provisioner/pvc-f752037c-3c31-451d-be8f-825295773e36_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-054300 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-054300 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (274.510548ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:28:12.635144  386553 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:12.635899  386553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:12.635913  386553 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:12.635919  386553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:12.636180  386553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:12.636496  386553 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:12.636892  386553 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:12.636911  386553 addons.go:622] checking whether the cluster is paused
	I1210 07:28:12.637019  386553 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:12.637033  386553 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:12.637534  386553 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:12.657252  386553 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:12.657317  386553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:12.675722  386553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:12.785561  386553 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:12.785674  386553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:12.815808  386553 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:12.815833  386553 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:12.815839  386553 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:12.815843  386553 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:12.815847  386553 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:12.815850  386553 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:12.815853  386553 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:12.815856  386553 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:12.815859  386553 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:12.815865  386553 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:12.815868  386553 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:12.815871  386553 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:12.815874  386553 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:12.815877  386553 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:12.815880  386553 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:12.815885  386553 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:12.815893  386553 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:12.815897  386553 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:12.815900  386553 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:12.815903  386553 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:12.815907  386553 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:12.815910  386553 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:12.815913  386553 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:12.815916  386553 cri.go:89] found id: ""
	I1210 07:28:12.815975  386553 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:12.832087  386553 out.go:203] 
	W1210 07:28:12.835627  386553 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:12.835655  386553 out.go:285] * 
	W1210 07:28:12.841277  386553 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:12.844675  386553 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.47s)
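
All of the addon-disable failures in this report reduce to the same pause check: minikube shells into the node and runs `sudo runc list -f json`, which fails because /run/runc does not exist. One plausible (unconfirmed) explanation is that this CRI-O build uses crun as its default OCI runtime, so container state lives somewhere other than runc's default root. A minimal manual reproduction, reusing the profile name and the exact commands from the log above; the final crun line is a hypothetical cross-check, not something the log confirms:

	# The two commands the pause check runs (taken verbatim from the log):
	out/minikube-linux-arm64 -p addons-054300 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p addons-054300 ssh -- sudo runc list -f json    # reproduces: open /run/runc: no such file or directory
	# Hypothetical cross-check, assuming CRI-O is configured with crun:
	out/minikube-linux-arm64 -p addons-054300 ssh -- sudo crun list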

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.36s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-jgw4d" [f73a2f9f-873e-49a1-b56a-3b4e891849be] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003847204s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (358.149034ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:28:04.104719  386179 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:28:04.105602  386179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:04.105650  386179 out.go:374] Setting ErrFile to fd 2...
	I1210 07:28:04.105674  386179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:28:04.105965  386179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:28:04.106324  386179 mustload.go:66] Loading cluster: addons-054300
	I1210 07:28:04.106797  386179 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:04.106886  386179 addons.go:622] checking whether the cluster is paused
	I1210 07:28:04.107064  386179 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:28:04.107101  386179 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:28:04.107630  386179 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:28:04.137804  386179 ssh_runner.go:195] Run: systemctl --version
	I1210 07:28:04.137864  386179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:28:04.169945  386179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:28:04.290184  386179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:28:04.290282  386179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:28:04.345606  386179 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:28:04.345630  386179 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:28:04.345635  386179 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:28:04.345639  386179 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:28:04.345642  386179 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:28:04.345646  386179 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:28:04.345650  386179 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:28:04.345654  386179 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:28:04.345657  386179 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:28:04.345664  386179 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:28:04.345668  386179 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:28:04.345672  386179 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:28:04.345675  386179 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:28:04.345678  386179 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:28:04.345681  386179 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:28:04.345686  386179 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:28:04.345694  386179 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:28:04.345697  386179 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:28:04.345700  386179 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:28:04.345703  386179 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:28:04.345708  386179 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:28:04.345711  386179 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:28:04.345714  386179 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:28:04.345725  386179 cri.go:89] found id: ""
	I1210 07:28:04.345785  386179 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:28:04.364022  386179 out.go:203] 
	W1210 07:28:04.366966  386179 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:28:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:28:04.366997  386179 out.go:285] * 
	W1210 07:28:04.373186  386179 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:28:04.376190  386179 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.36s)

                                                
                                    
TestAddons/parallel/Yakd (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-r7798" [85672480-6f79-41e1-a333-0ff0331eca5d] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003283262s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-054300 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-054300 addons disable yakd --alsologtostderr -v=1: exit status 11 (289.085393ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 07:27:57.786436  386110 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:27:57.787523  386110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:27:57.787540  386110 out.go:374] Setting ErrFile to fd 2...
	I1210 07:27:57.787651  386110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:27:57.787939  386110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:27:57.788255  386110 mustload.go:66] Loading cluster: addons-054300
	I1210 07:27:57.789010  386110 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:27:57.789037  386110 addons.go:622] checking whether the cluster is paused
	I1210 07:27:57.789166  386110 config.go:182] Loaded profile config "addons-054300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:27:57.789257  386110 host.go:66] Checking if "addons-054300" exists ...
	I1210 07:27:57.789832  386110 cli_runner.go:164] Run: docker container inspect addons-054300 --format={{.State.Status}}
	I1210 07:27:57.829262  386110 ssh_runner.go:195] Run: systemctl --version
	I1210 07:27:57.829320  386110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-054300
	I1210 07:27:57.853595  386110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/addons-054300/id_rsa Username:docker}
	I1210 07:27:57.949685  386110 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:27:57.949804  386110 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:27:57.980188  386110 cri.go:89] found id: "93c6c5614c0a9a2eb74cabfde767b532e188ff8cd89a94813838268e5804151b"
	I1210 07:27:57.980260  386110 cri.go:89] found id: "2f1be47dacec87c0a56b3cd1eb73c6d247005830f78472193131c56d054bcc4a"
	I1210 07:27:57.980273  386110 cri.go:89] found id: "c796a7524dca5dedee39b3961c5997d3d72f431343a00508663733fc1be6878c"
	I1210 07:27:57.980278  386110 cri.go:89] found id: "31f1c3a5096a8785d7b60bd481c9f3c1cf0709868ce6cb1cc139db75f52c1f42"
	I1210 07:27:57.980282  386110 cri.go:89] found id: "7eebbffe34f04d7ad5bdc3e5c3aef890263f8a90ca947aaeeca6359c36cc1191"
	I1210 07:27:57.980286  386110 cri.go:89] found id: "7513d10c4f49be5d24f91c1199a5db71465badd2bf9a2a1a84bcf9829a3c814e"
	I1210 07:27:57.980289  386110 cri.go:89] found id: "5668f24e462ca639e29b3d82f2cc864344a08ca26a729049642a2924016b13e4"
	I1210 07:27:57.980293  386110 cri.go:89] found id: "03d15632d265591c79e3367a3210fd42b6febf2d5d06f75bf11cf5ba909f8df7"
	I1210 07:27:57.980296  386110 cri.go:89] found id: "066dde63f910d4f04f548debd89f326b5a438ce283264953c1a7666cbeaf01dd"
	I1210 07:27:57.980303  386110 cri.go:89] found id: "607af77f9ee4c14a40adf7d8a6d9170dbdb3216cdeee2e67303dfa6b22ceb2ab"
	I1210 07:27:57.980310  386110 cri.go:89] found id: "b99ece48f039a1adf74d1e66696d0dfe4716da14982d4112abccc91ba77922e1"
	I1210 07:27:57.980319  386110 cri.go:89] found id: "44b34e0a56c7013a6bda4740b0b76cc47606a7950d05f25fc59201a8c1464857"
	I1210 07:27:57.980323  386110 cri.go:89] found id: "b51e7b373a333c5307dd63dd8ebd295dff1e1b749b51e9aaca3e63155b75c735"
	I1210 07:27:57.980326  386110 cri.go:89] found id: "4c160e32d403d540fd0f5076cbbb334cc640351c4da249136985c61c8ddb07f9"
	I1210 07:27:57.980330  386110 cri.go:89] found id: "ea57c044de907492a9a5cd673e06bf24ec59b5559c417648fe898ca3c4fa4e4f"
	I1210 07:27:57.980335  386110 cri.go:89] found id: "010ebc9ab887dac18a762a98373849d39c683302618e70cd88a2ca7e38a5535f"
	I1210 07:27:57.980341  386110 cri.go:89] found id: "bf6af03dc750859dc6c94c9e6d82d48b4876eee71ec2028f9a291deb7578562c"
	I1210 07:27:57.980352  386110 cri.go:89] found id: "cd4a11fe2765202ff0a3776d921d19a73423afa2a34ebc3326d6039c5e6a30d7"
	I1210 07:27:57.980355  386110 cri.go:89] found id: "423282b955e322a79f61f0e6396aaf4856be9146d169ba0017ff6c9344317365"
	I1210 07:27:57.980359  386110 cri.go:89] found id: "6f7abeab2dc46acc945548fb9227342447b975273c96c12c95c3030a8fe43009"
	I1210 07:27:57.980363  386110 cri.go:89] found id: "7e676b17ce03a2d6e29e4632f9bf789ed23b923d4e1656d76c8dc18d9a19f765"
	I1210 07:27:57.980369  386110 cri.go:89] found id: "f47e61eab5569fd31608bd377428e8097fb04caa8d1759ffcd5dafe6a7257e83"
	I1210 07:27:57.980373  386110 cri.go:89] found id: "6f9a84d527a09dd38aac5cf60b6359dfddc676ea260108b8f4f852340f51772f"
	I1210 07:27:57.980388  386110 cri.go:89] found id: ""
	I1210 07:27:57.980444  386110 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 07:27:58.000119  386110 out.go:203] 
	W1210 07:27:58.003825  386110 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:27:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 07:27:58.003895  386110 out.go:285] * 
	W1210 07:27:58.009832  386110 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:27:58.013188  386110 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-054300 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.79s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1210 07:37:27.801015  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:37:55.509523  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:04.883226  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:04.889615  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:04.901101  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:04.922606  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:04.964160  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:05.045712  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:05.207298  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:05.529041  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:06.171202  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:07.452534  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:10.014383  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:15.135992  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:25.377431  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:40:45.858917  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:41:26.821608  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:42:27.800553  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:42:48.743120  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
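
The cert_rotation errors above are host-side noise: the test process's kubeconfig still references client certificates for earlier profiles (addons-054300, functional-446865) whose .minikube files are already gone. Assuming those profiles are finished with, a hedged cleanup is to delete them so their kubeconfig contexts and certificate references go away:

	# Sketch: remove leftover profiles (and their kubeconfig entries);
	# assumes these earlier profiles are no longer needed by later tests.
	out/minikube-linux-arm64 delete -p functional-446865
	out/minikube-linux-arm64 delete -p addons-054300
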
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.363197696s)

                                                
                                                
-- stdout --
	* [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Found network options:
	  - HTTP_PROXY=localhost:39837
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:39837 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-314220 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-314220 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001289271s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr omitted: byte-for-byte identical to the "X Error starting cluster" output above]
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
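
The stderr above carries two actionable hints: the NO_PROXY warning near the top and the cgroup-driver suggestion at the bottom. A hedged triage-and-retry sketch that uses only values the output itself proposes (neither is a verified fix for this kubelet health-check timeout):

	# Make NO_PROXY cover the minikube IP, as the warning requests:
	export NO_PROXY="${NO_PROXY},192.168.49.2"
	# Inspect the kubelet on the node before retrying:
	out/minikube-linux-arm64 -p functional-314220 ssh -- sudo systemctl status kubelet
	out/minikube-linux-arm64 -p functional-314220 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# Retry the same start line with the suggested cgroup driver override:
	out/minikube-linux-arm64 start -p functional-314220 --memory=4096 --apiserver-port=8441 \
	  --wait=all --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd
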
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
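The inspect output above shows each port the kic container publishes (22, 2376, 5000, 8441, 32443) bound to an ephemeral host port on 127.0.0.1. Later log lines resolve those bindings with a `docker container inspect` Go template; the sketch below does the same lookup standalone. It assumes only that docker is on PATH, and the helper name hostPortFor is ours, not minikube's:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPortFor resolves the ephemeral host port published for a container
    // port, using the same inspect template the minikube logs show for "22/tcp".
    func hostPortFor(container, containerPort string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostPortFor("functional-314220", "22/tcp")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh host port:", port) // 33158 for the container inspected above
    }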
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 6 (306.825171ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 07:44:09.331821  412668 status.go:458] kubeconfig endpoint: get endpoint: "functional-314220" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
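The exit status 6 traces to the stderr block above: the node itself reports Running, but the profile "functional-314220" has no entry in the run's kubeconfig, so the endpoint lookup in status.go fails. A rough sketch of that presence check, with the caveat that minikube parses the kubeconfig properly while this does a plain substring test:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        path := os.Getenv("KUBECONFIG") // the log shows .../22089-376671/kubeconfig
        if path == "" {
            path = os.Getenv("HOME") + "/.kube/config"
        }
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("read kubeconfig:", err)
            return
        }
        if strings.Contains(string(data), "functional-314220") {
            fmt.Println("context present")
        } else {
            fmt.Println(`context missing; run "minikube update-context"`)
        }
    }

When the entry is missing, the `minikube update-context` command suggested in the stdout above rewrites the kubeconfig to point at the cluster's current apiserver endpoint.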
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-446865 ssh sudo umount -f /mount-9p                                                                                                    │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdspecific-port2634105139/001:/mount-9p --alsologtostderr -v=1 --port 46464                 │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ ssh            │ functional-446865 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh -- ls -la /mount-9p                                                                                                         │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh sudo umount -f /mount-9p                                                                                                    │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount1 --alsologtostderr -v=1                                │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount2 --alsologtostderr -v=1                                │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ ssh            │ functional-446865 ssh findmnt -T /mount1                                                                                                          │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount3 --alsologtostderr -v=1                                │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ ssh            │ functional-446865 ssh findmnt -T /mount2                                                                                                          │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh findmnt -T /mount3                                                                                                          │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ mount          │ -p functional-446865 --kill=true                                                                                                                  │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ update-context │ functional-446865 update-context --alsologtostderr -v=2                                                                                           │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ update-context │ functional-446865 update-context --alsologtostderr -v=2                                                                                           │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ update-context │ functional-446865 update-context --alsologtostderr -v=2                                                                                           │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format short --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format yaml --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh pgrep buildkitd                                                                                                             │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ image          │ functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr                                            │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format json --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls                                                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format table --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ delete         │ -p functional-446865                                                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ start          │ -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:35:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:35:48.696508  407117 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:35:48.696681  407117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:35:48.696685  407117 out.go:374] Setting ErrFile to fd 2...
	I1210 07:35:48.696689  407117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:35:48.696961  407117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:35:48.697396  407117 out.go:368] Setting JSON to false
	I1210 07:35:48.698245  407117 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8299,"bootTime":1765343850,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:35:48.698303  407117 start.go:143] virtualization:  
	I1210 07:35:48.702780  407117 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:35:48.707503  407117 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:35:48.707585  407117 notify.go:221] Checking for updates...
	I1210 07:35:48.714510  407117 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:35:48.717776  407117 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:35:48.720859  407117 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:35:48.723901  407117 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:35:48.727140  407117 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:35:48.730562  407117 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:35:48.759136  407117 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:35:48.759251  407117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:35:48.816609  407117 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-10 07:35:48.807465897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:35:48.816709  407117 docker.go:319] overlay module found
	I1210 07:35:48.819926  407117 out.go:179] * Using the docker driver based on user configuration
	I1210 07:35:48.822940  407117 start.go:309] selected driver: docker
	I1210 07:35:48.822947  407117 start.go:927] validating driver "docker" against <nil>
	I1210 07:35:48.822959  407117 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:35:48.823736  407117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:35:48.873535  407117 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-10 07:35:48.863875056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:35:48.873680  407117 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:35:48.873904  407117 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:35:48.877001  407117 out.go:179] * Using Docker driver with root privileges
	I1210 07:35:48.880006  407117 cni.go:84] Creating CNI manager for ""
	I1210 07:35:48.880068  407117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:35:48.880075  407117 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:35:48.880162  407117 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:35:48.883422  407117 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:35:48.886347  407117 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:35:48.889323  407117 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:35:48.892238  407117 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:35:48.892279  407117 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:35:48.892305  407117 cache.go:65] Caching tarball of preloaded images
	I1210 07:35:48.892316  407117 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:35:48.892397  407117 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:35:48.892406  407117 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 07:35:48.892748  407117 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:35:48.892765  407117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json: {Name:mk01be504157ad2ab11aa94366e3f8fbd920d565 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:35:48.911252  407117 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:35:48.911263  407117 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:35:48.911283  407117 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:35:48.911306  407117 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:35:48.911422  407117 start.go:364] duration metric: took 100.236µs to acquireMachinesLock for "functional-314220"
	I1210 07:35:48.911452  407117 start.go:93] Provisioning new machine with config: &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:35:48.911511  407117 start.go:125] createHost starting for "" (driver="docker")
	I1210 07:35:48.914923  407117 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1210 07:35:48.915246  407117 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:39837 to docker env.
	I1210 07:35:48.915271  407117 start.go:159] libmachine.API.Create for "functional-314220" (driver="docker")
	I1210 07:35:48.915297  407117 client.go:173] LocalClient.Create starting
	I1210 07:35:48.915369  407117 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem
	I1210 07:35:48.915402  407117 main.go:143] libmachine: Decoding PEM data...
	I1210 07:35:48.915419  407117 main.go:143] libmachine: Parsing certificate...
	I1210 07:35:48.915477  407117 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem
	I1210 07:35:48.915494  407117 main.go:143] libmachine: Decoding PEM data...
	I1210 07:35:48.915504  407117 main.go:143] libmachine: Parsing certificate...
	I1210 07:35:48.915854  407117 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 07:35:48.932782  407117 cli_runner.go:211] docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 07:35:48.932857  407117 network_create.go:284] running [docker network inspect functional-314220] to gather additional debugging logs...
	I1210 07:35:48.932872  407117 cli_runner.go:164] Run: docker network inspect functional-314220
	W1210 07:35:48.949090  407117 cli_runner.go:211] docker network inspect functional-314220 returned with exit code 1
	I1210 07:35:48.949111  407117 network_create.go:287] error running [docker network inspect functional-314220]: docker network inspect functional-314220: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-314220 not found
	I1210 07:35:48.949142  407117 network_create.go:289] output of [docker network inspect functional-314220]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-314220 not found
	
	** /stderr **
	I1210 07:35:48.949256  407117 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:35:48.965752  407117 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f39a0}
	I1210 07:35:48.965785  407117 network_create.go:124] attempt to create docker network functional-314220 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1210 07:35:48.965844  407117 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-314220 functional-314220
	I1210 07:35:49.048545  407117 network_create.go:108] docker network functional-314220 192.168.49.0/24 created
	I1210 07:35:49.048569  407117 kic.go:121] calculated static IP "192.168.49.2" for the "functional-314220" container
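The probe at 07:35:48.965752 picks the first free private /24 and derives the gateway, usable client range, and broadcast address that the log line records. A minimal sketch of that derivation for the chosen subnet; the real code additionally walks candidate subnets and skips any that collide with existing networks or routes:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // fields match the network.go log line above for 192.168.49.0/24
        _, ipnet, err := net.ParseCIDR("192.168.49.0/24")
        if err != nil {
            panic(err)
        }
        base := ipnet.IP.To4()
        gateway := net.IPv4(base[0], base[1], base[2], 1)   // 192.168.49.1
        clientMin := net.IPv4(base[0], base[1], base[2], 2) // 192.168.49.2, the node IP
        clientMax := net.IPv4(base[0], base[1], base[2], 254)
        broadcast := net.IPv4(base[0], base[1], base[2], 255)
        fmt.Println(gateway, clientMin, clientMax, broadcast)
    }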
	I1210 07:35:49.048645  407117 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 07:35:49.064615  407117 cli_runner.go:164] Run: docker volume create functional-314220 --label name.minikube.sigs.k8s.io=functional-314220 --label created_by.minikube.sigs.k8s.io=true
	I1210 07:35:49.083000  407117 oci.go:103] Successfully created a docker volume functional-314220
	I1210 07:35:49.083163  407117 cli_runner.go:164] Run: docker run --rm --name functional-314220-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-314220 --entrypoint /usr/bin/test -v functional-314220:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 07:35:49.595080  407117 oci.go:107] Successfully prepared a docker volume functional-314220
	I1210 07:35:49.595146  407117 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:35:49.595154  407117 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 07:35:49.595227  407117 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-314220:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 07:35:53.507733  407117 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-314220:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (3.91247254s)
	I1210 07:35:53.507762  407117 kic.go:203] duration metric: took 3.91260563s to extract preloaded images to volume ...
	W1210 07:35:53.507902  407117 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 07:35:53.508002  407117 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 07:35:53.567745  407117 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-314220 --name functional-314220 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-314220 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-314220 --network functional-314220 --ip 192.168.49.2 --volume functional-314220:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 07:35:53.860669  407117 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Running}}
	I1210 07:35:53.887787  407117 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:35:53.908786  407117 cli_runner.go:164] Run: docker exec functional-314220 stat /var/lib/dpkg/alternatives/iptables
	I1210 07:35:53.964705  407117 oci.go:144] the created container "functional-314220" has a running status.
	I1210 07:35:53.964724  407117 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa...
	I1210 07:35:54.162263  407117 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 07:35:54.200768  407117 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:35:54.223105  407117 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 07:35:54.223116  407117 kic_runner.go:114] Args: [docker exec --privileged functional-314220 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1210 07:35:54.291277  407117 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:35:54.322346  407117 machine.go:94] provisionDockerMachine start ...
	I1210 07:35:54.322458  407117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:35:54.351297  407117 main.go:143] libmachine: Using SSH client type: native
	I1210 07:35:54.351635  407117 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:35:54.351644  407117 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:35:54.352253  407117 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 07:35:57.486647  407117 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:35:57.486662  407117 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:35:57.486732  407117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:35:57.504157  407117 main.go:143] libmachine: Using SSH client type: native
	I1210 07:35:57.504463  407117 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:35:57.504472  407117 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:35:57.647996  407117 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:35:57.648066  407117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:35:57.664902  407117 main.go:143] libmachine: Using SSH client type: native
	I1210 07:35:57.665228  407117 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:35:57.665241  407117 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:35:57.799300  407117 main.go:143] libmachine: SSH cmd err, output: <nil>: 
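The lines from 07:35:54.351297 to here are three SSH round-trips over the forwarded port 33158: a `hostname` probe (retried after the initial handshake EOF while sshd finishes starting), setting the hostname, and patching /etc/hosts. A minimal sketch of one such round-trip with golang.org/x/crypto/ssh, reusing the key path and port from the log; skipping host-key verification is a simplification for the sketch, not what a hardened client should do:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // simplification for the sketch
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33158", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out) // expected: functional-314220
    }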
	I1210 07:35:57.799316  407117 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:35:57.799340  407117 ubuntu.go:190] setting up certificates
	I1210 07:35:57.799348  407117 provision.go:84] configureAuth start
	I1210 07:35:57.799406  407117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:35:57.816541  407117 provision.go:143] copyHostCerts
	I1210 07:35:57.816601  407117 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:35:57.816609  407117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:35:57.816684  407117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:35:57.816770  407117 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:35:57.816773  407117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:35:57.816796  407117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:35:57.816842  407117 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:35:57.816845  407117 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:35:57.816866  407117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:35:57.816906  407117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:35:57.915819  407117 provision.go:177] copyRemoteCerts
	I1210 07:35:57.915882  407117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:35:57.915925  407117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:35:57.937806  407117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:35:58.035367  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:35:58.053979  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:35:58.071550  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:35:58.089143  407117 provision.go:87] duration metric: took 289.760797ms to configureAuth
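configureAuth generates a server certificate whose SANs are exactly the list logged at 07:35:57.816906: 127.0.0.1, 192.168.49.2, functional-314220, localhost, minikube. A compact sketch of that certificate shape with crypto/x509; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem as the log shows:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-314220"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the cluster config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the provision.go log line
            DNSNames:    []string{"functional-314220", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }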
	I1210 07:35:58.089162  407117 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:35:58.089359  407117 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:35:58.089459  407117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:35:58.107379  407117 main.go:143] libmachine: Using SSH client type: native
	I1210 07:35:58.107689  407117 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:35:58.107700  407117 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:35:58.391411  407117 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:35:58.391427  407117 machine.go:97] duration metric: took 4.069049754s to provisionDockerMachine
	I1210 07:35:58.391436  407117 client.go:176] duration metric: took 9.47613408s to LocalClient.Create
	I1210 07:35:58.391449  407117 start.go:167] duration metric: took 9.476179324s to libmachine.API.Create "functional-314220"
	I1210 07:35:58.391455  407117 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:35:58.391481  407117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:35:58.391547  407117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:35:58.391592  407117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:35:58.409437  407117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:35:58.507252  407117 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:35:58.510837  407117 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:35:58.510856  407117 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:35:58.510867  407117 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:35:58.510925  407117 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:35:58.511047  407117 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:35:58.511131  407117 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:35:58.511179  407117 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:35:58.519065  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:35:58.538318  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:35:58.556667  407117 start.go:296] duration metric: took 165.197533ms for postStartSetup
	I1210 07:35:58.557085  407117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:35:58.574930  407117 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:35:58.575283  407117 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:35:58.575336  407117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:35:58.594164  407117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:35:58.688054  407117 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:35:58.692861  407117 start.go:128] duration metric: took 9.781335138s to createHost
	I1210 07:35:58.692876  407117 start.go:83] releasing machines lock for "functional-314220", held for 9.781447592s
	I1210 07:35:58.692953  407117 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:35:58.710222  407117 out.go:179] * Found network options:
	I1210 07:35:58.713413  407117 out.go:179]   - HTTP_PROXY=localhost:39837
	W1210 07:35:58.716386  407117 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1210 07:35:58.719235  407117 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1210 07:35:58.722149  407117 ssh_runner.go:195] Run: cat /version.json
	I1210 07:35:58.722203  407117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:35:58.722211  407117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:35:58.722264  407117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:35:58.740814  407117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:35:58.742111  407117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:35:58.921128  407117 ssh_runner.go:195] Run: systemctl --version
	I1210 07:35:58.927554  407117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:35:58.965802  407117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:35:58.969951  407117 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:35:58.970013  407117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:35:58.998071  407117 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 07:35:58.998087  407117 start.go:496] detecting cgroup driver to use...
	I1210 07:35:58.998126  407117 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:35:58.998183  407117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:35:59.018610  407117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:35:59.031258  407117 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:35:59.031316  407117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:35:59.049335  407117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:35:59.068291  407117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:35:59.181164  407117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:35:59.308396  407117 docker.go:234] disabling docker service ...
	I1210 07:35:59.308458  407117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:35:59.330305  407117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:35:59.343768  407117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:35:59.456721  407117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:35:59.568345  407117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:35:59.581196  407117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:35:59.595538  407117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:35:59.595598  407117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:35:59.604325  407117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:35:59.604384  407117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:35:59.613091  407117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:35:59.621658  407117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:35:59.630299  407117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:35:59.638207  407117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:35:59.646920  407117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:35:59.660474  407117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:35:59.669142  407117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:35:59.676643  407117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:35:59.684085  407117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:35:59.806836  407117 ssh_runner.go:195] Run: sudo systemctl restart crio
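The run of sed commands from 07:35:59.595538 down to the crio restart rewrites whole `key = value` lines in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. The same whole-line replacement expressed in Go, operating on an in-memory copy; setTOMLKey is our name for it, not minikube's:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setTOMLKey replaces an entire `key = value` line, like the
    // `sed -i 's|^.*key = .*$|...|'` invocations in the log above.
    func setTOMLKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
        fmt.Print(conf)
    }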
	I1210 07:35:59.978481  407117 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:35:59.978541  407117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:35:59.982110  407117 start.go:564] Will wait 60s for crictl version
	I1210 07:35:59.982195  407117 ssh_runner.go:195] Run: which crictl
	I1210 07:35:59.985571  407117 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:36:00.048441  407117 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:36:00.048534  407117 ssh_runner.go:195] Run: crio --version
	I1210 07:36:00.137407  407117 ssh_runner.go:195] Run: crio --version
	I1210 07:36:00.213817  407117 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:36:00.217947  407117 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:36:00.270703  407117 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:36:00.278664  407117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
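The /etc/hosts edit above follows a filter-then-append pattern: drop any stale host.minikube.internal line, append the current gateway mapping, write a temp file, and `sudo cp` it back (cp rather than mv, since /etc/hosts is bind-mounted into the container and cannot be replaced by rename). A sketch of the same logic in Go, assuming the process is allowed to write /etc/hosts:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // mirrors grep -v $'\thost.minikube.internal$'
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.49.1\thost.minikube.internal")
        out := strings.Join(kept, "\n") + "\n"
        if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil { // stand-in for /tmp/h.$$
            panic(err)
        }
        if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil { // stand-in for `sudo cp`
            panic(err)
        }
    }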
	I1210 07:36:00.321364  407117 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:36:00.321495  407117 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:36:00.321564  407117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:36:00.382950  407117 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:36:00.382964  407117 crio.go:433] Images already preloaded, skipping extraction
	I1210 07:36:00.383070  407117 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:36:00.448866  407117 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:36:00.448881  407117 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:36:00.448889  407117 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:36:00.449011  407117 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:36:00.449140  407117 ssh_runner.go:195] Run: crio config
	I1210 07:36:00.523486  407117 cni.go:84] Creating CNI manager for ""
	I1210 07:36:00.523497  407117 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:36:00.523517  407117 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:36:00.523540  407117 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:36:00.523671  407117 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:36:00.523752  407117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:36:00.533096  407117 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:36:00.533190  407117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:36:00.542333  407117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:36:00.558783  407117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:36:00.573174  407117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
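The kubeadm config rendered above was just copied to /var/tmp/minikube/kubeadm.yaml.new (it is promoted to kubeadm.yaml before init runs). When the init fails later in this log, it can be worth confirming the file itself is well-formed before suspecting the node; a sketch, assuming kubeadm v1.26 or newer where the validate subcommand is available:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or walk the whole init path without mutating the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run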
	I1210 07:36:00.588228  407117 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:36:00.592392  407117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
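The one-liner above updates /etc/hosts idempotently: strip any existing control-plane.minikube.internal entry, append the current IP, and swap the file in via a temp copy. The same pattern as a reusable shell function (the helper name is hypothetical; the logic is identical):

    update_host() {   # usage: update_host 192.168.49.2 control-plane.minikube.internal
      local ip="$1" name="$2"
      # keep every line except the old entry, then append the fresh one
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }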
	I1210 07:36:00.603719  407117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:36:00.727193  407117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:36:00.743767  407117 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:36:00.743779  407117 certs.go:195] generating shared ca certs ...
	I1210 07:36:00.743803  407117 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:36:00.743977  407117 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:36:00.744036  407117 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:36:00.744043  407117 certs.go:257] generating profile certs ...
	I1210 07:36:00.744103  407117 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:36:00.744117  407117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt with IP's: []
	I1210 07:36:01.107425  407117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt ...
	I1210 07:36:01.107443  407117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: {Name:mkab67c1d3f918f78e37ca9bc23cf6fff37830d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:36:01.107657  407117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key ...
	I1210 07:36:01.107664  407117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key: {Name:mka07564f1864d21dd75f9f10eba6666b0cd31de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:36:01.107759  407117 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:36:01.107771  407117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt.8ae59347 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1210 07:36:01.557985  407117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt.8ae59347 ...
	I1210 07:36:01.558001  407117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt.8ae59347: {Name:mkcad3727bcc3505d1f96f1e095011072120390f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:36:01.558196  407117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347 ...
	I1210 07:36:01.558203  407117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347: {Name:mkbd124c61a3ba0d2fab1e3c7423f3941938b9aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:36:01.558289  407117 certs.go:382] copying /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt.8ae59347 -> /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt
	I1210 07:36:01.558365  407117 certs.go:386] copying /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347 -> /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key
	I1210 07:36:01.558417  407117 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:36:01.558438  407117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt with IP's: []
	I1210 07:36:01.780244  407117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt ...
	I1210 07:36:01.780260  407117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt: {Name:mk52ceab056909495df2356053fa82579df8d7d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:36:01.780441  407117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key ...
	I1210 07:36:01.780449  407117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key: {Name:mkf069791fa3099875e78dce84e0ba5aff428309 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
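crypto.go generates these profile certs in-process against the shared minikubeCA. For reference, a rough openssl equivalent of the "minikube-user" client-cert step is below; the subject line is an assumption for illustration only (the real subject is set inside minikube's crypto.go and is not visible in this log):

    # hypothetical openssl re-creation of the client-cert step (sketch only)
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout client.key -out client.csr \
      -subj "/O=system:masters/CN=minikube-user"        # subject assumed
    openssl x509 -req -in client.csr -days 365 \
      -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt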
	I1210 07:36:01.780648  407117 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:36:01.780688  407117 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:36:01.780695  407117 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:36:01.780720  407117 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:36:01.780742  407117 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:36:01.780766  407117 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:36:01.780811  407117 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:36:01.781425  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:36:01.798763  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:36:01.816880  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:36:01.834741  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:36:01.852992  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:36:01.871161  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:36:01.888945  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:36:01.906803  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:36:01.924457  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:36:01.942350  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:36:01.960246  407117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:36:01.978175  407117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:36:01.990982  407117 ssh_runner.go:195] Run: openssl version
	I1210 07:36:01.997905  407117 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:36:02.007175  407117 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:36:02.015106  407117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:36:02.019064  407117 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:36:02.019130  407117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:36:02.060839  407117 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:36:02.068145  407117 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/378528.pem /etc/ssl/certs/51391683.0
	I1210 07:36:02.075318  407117 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:36:02.082655  407117 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:36:02.090163  407117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:36:02.093992  407117 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:36:02.094051  407117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:36:02.135164  407117 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:36:02.142719  407117 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3785282.pem /etc/ssl/certs/3ec20f2e.0
	I1210 07:36:02.150399  407117 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:36:02.157991  407117 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:36:02.165936  407117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:36:02.170624  407117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:36:02.170697  407117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:36:02.212899  407117 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:36:02.220791  407117 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
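The test/hash/ln sequence above builds OpenSSL's subject-hash layout: each trusted CA must be reachable under /etc/ssl/certs as <subject-hash>.0, which is why the links are named 51391683.0, 3ec20f2e.0 and b5213941.0. Reproducing one link by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here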
	I1210 07:36:02.228740  407117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:36:02.235823  407117 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 07:36:02.235866  407117 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:36:02.235933  407117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:36:02.235988  407117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:36:02.263604  407117 cri.go:89] found id: ""
	I1210 07:36:02.263665  407117 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:36:02.271595  407117 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:36:02.279647  407117 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:36:02.279705  407117 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:36:02.287465  407117 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:36:02.287474  407117 kubeadm.go:158] found existing configuration files:
	
	I1210 07:36:02.287535  407117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:36:02.295325  407117 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:36:02.295387  407117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:36:02.302885  407117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:36:02.310907  407117 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:36:02.310963  407117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:36:02.318703  407117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:36:02.326381  407117 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:36:02.326447  407117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:36:02.334064  407117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:36:02.341932  407117 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:36:02.341995  407117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:36:02.349774  407117 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:36:02.387720  407117 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:36:02.388029  407117 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:36:02.460462  407117 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:36:02.460561  407117 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:36:02.460601  407117 kubeadm.go:319] OS: Linux
	I1210 07:36:02.460650  407117 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:36:02.460697  407117 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:36:02.460743  407117 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:36:02.460790  407117 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:36:02.460837  407117 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:36:02.460885  407117 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:36:02.460938  407117 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:36:02.460984  407117 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:36:02.461029  407117 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:36:02.532122  407117 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:36:02.532232  407117 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:36:02.532323  407117 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:36:02.540222  407117 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:36:02.546474  407117 out.go:252]   - Generating certificates and keys ...
	I1210 07:36:02.546568  407117 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:36:02.546631  407117 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:36:02.675160  407117 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 07:36:03.301170  407117 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 07:36:03.403735  407117 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 07:36:03.987955  407117 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 07:36:04.188862  407117 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 07:36:04.189018  407117 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-314220 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 07:36:04.279314  407117 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 07:36:04.279473  407117 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-314220 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1210 07:36:04.545055  407117 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 07:36:04.836142  407117 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 07:36:04.923533  407117 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 07:36:04.923843  407117 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:36:05.495241  407117 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:36:05.674682  407117 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:36:05.983353  407117 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:36:06.198217  407117 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:36:06.409180  407117 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:36:06.409840  407117 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:36:06.412457  407117 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:36:06.418351  407117 out.go:252]   - Booting up control plane ...
	I1210 07:36:06.418450  407117 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:36:06.418522  407117 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:36:06.418583  407117 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:36:06.433449  407117 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:36:06.433726  407117 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:36:06.441396  407117 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:36:06.441916  407117 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:36:06.442134  407117 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:36:06.575542  407117 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:36:06.575651  407117 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:40:06.577416  407117 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001289271s
	I1210 07:40:06.577442  407117 kubeadm.go:319] 
	I1210 07:40:06.577499  407117 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:40:06.577531  407117 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:40:06.577710  407117 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:40:06.577725  407117 kubeadm.go:319] 
	I1210 07:40:06.577849  407117 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:40:06.577888  407117 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:40:06.577930  407117 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:40:06.577933  407117 kubeadm.go:319] 
	I1210 07:40:06.581673  407117 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:40:06.582160  407117 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:40:06.582300  407117 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:40:06.582598  407117 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:40:06.582604  407117 kubeadm.go:319] 
	I1210 07:40:06.582682  407117 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
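kubeadm gave up here after its 4m0s kubelet health window. Its suggested probes, plus the healthz call it was polling, can be replayed directly on the node to see why the kubelet never answered:

    systemctl status kubelet
    journalctl -xeu kubelet -n 200
    curl -sS http://127.0.0.1:10248/healthz   # the endpoint kubeadm polls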
	W1210 07:40:06.582814  407117 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-314220 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-314220 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001289271s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
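The second SystemVerification warning deserves attention: this node runs cgroups v1 (see the CGROUPS_* list and cgroupDriver: cgroupfs above), and per the warning kubelet v1.35+ will not start on cgroup v1 unless FailCgroupV1 is explicitly set to false, which is consistent with a healthz endpoint that never comes up. A sketch of the opt-in the warning describes (field name taken from the warning; this assumes the generated /var/lib/kubelet/config.yaml ends at top level so an appended key parses):

    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet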
	
	I1210 07:40:06.582933  407117 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 07:40:07.002916  407117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:40:07.016782  407117 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:40:07.016838  407117 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:40:07.024962  407117 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:40:07.024972  407117 kubeadm.go:158] found existing configuration files:
	
	I1210 07:40:07.025022  407117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:40:07.032850  407117 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:40:07.032909  407117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:40:07.040711  407117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:40:07.048505  407117 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:40:07.048571  407117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:40:07.056195  407117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:40:07.063835  407117 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:40:07.063899  407117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:40:07.071340  407117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:40:07.078755  407117 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:40:07.078810  407117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:40:07.086182  407117 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:40:07.127526  407117 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:40:07.127763  407117 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:40:07.201670  407117 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:40:07.201752  407117 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:40:07.201793  407117 kubeadm.go:319] OS: Linux
	I1210 07:40:07.201837  407117 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:40:07.201890  407117 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:40:07.201936  407117 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:40:07.201982  407117 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:40:07.202029  407117 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:40:07.202075  407117 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:40:07.202119  407117 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:40:07.202165  407117 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:40:07.202209  407117 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:40:07.273684  407117 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:40:07.273840  407117 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:40:07.273958  407117 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:40:07.287550  407117 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:40:07.292819  407117 out.go:252]   - Generating certificates and keys ...
	I1210 07:40:07.292913  407117 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:40:07.292990  407117 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:40:07.293083  407117 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:40:07.293144  407117 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:40:07.293232  407117 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:40:07.293290  407117 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:40:07.293349  407117 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:40:07.293414  407117 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:40:07.293490  407117 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:40:07.293563  407117 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:40:07.293604  407117 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:40:07.293667  407117 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:40:07.362688  407117 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:40:07.654248  407117 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:40:07.939765  407117 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:40:08.167550  407117 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:40:08.396887  407117 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:40:08.397477  407117 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:40:08.400356  407117 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:40:08.403605  407117 out.go:252]   - Booting up control plane ...
	I1210 07:40:08.403707  407117 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:40:08.403790  407117 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:40:08.404607  407117 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:40:08.420015  407117 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:40:08.420113  407117 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:40:08.428800  407117 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:40:08.429568  407117 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:40:08.430619  407117 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:40:08.565454  407117 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:40:08.565567  407117 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:44:08.565738  407117 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000243388s
	I1210 07:44:08.565760  407117 kubeadm.go:319] 
	I1210 07:44:08.565815  407117 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:44:08.565847  407117 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:44:08.565949  407117 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:44:08.565952  407117 kubeadm.go:319] 
	I1210 07:44:08.566054  407117 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:44:08.566085  407117 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:44:08.566114  407117 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:44:08.566117  407117 kubeadm.go:319] 
	I1210 07:44:08.570870  407117 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:44:08.571301  407117 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:44:08.571409  407117 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:44:08.571644  407117 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:44:08.571649  407117 kubeadm.go:319] 
	I1210 07:44:08.571716  407117 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
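The retry after kubeadm reset fails identically, which points at the environment rather than stale state. The one remaining actionable warning is the disabled kubelet unit; enabling it as suggested is cheap to rule out, though on its own it does not explain the failed health check, since kubeadm starts the kubelet directly during init:

    sudo systemctl enable kubelet.service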
	I1210 07:44:08.571764  407117 kubeadm.go:403] duration metric: took 8m6.335901543s to StartCluster
	I1210 07:44:08.571809  407117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:44:08.571872  407117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:44:08.598442  407117 cri.go:89] found id: ""
	I1210 07:44:08.598456  407117 logs.go:282] 0 containers: []
	W1210 07:44:08.598463  407117 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:44:08.598468  407117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:44:08.598531  407117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:44:08.624896  407117 cri.go:89] found id: ""
	I1210 07:44:08.624910  407117 logs.go:282] 0 containers: []
	W1210 07:44:08.624917  407117 logs.go:284] No container was found matching "etcd"
	I1210 07:44:08.624922  407117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:44:08.624979  407117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:44:08.650031  407117 cri.go:89] found id: ""
	I1210 07:44:08.650045  407117 logs.go:282] 0 containers: []
	W1210 07:44:08.650051  407117 logs.go:284] No container was found matching "coredns"
	I1210 07:44:08.650060  407117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:44:08.650118  407117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:44:08.693206  407117 cri.go:89] found id: ""
	I1210 07:44:08.693219  407117 logs.go:282] 0 containers: []
	W1210 07:44:08.693226  407117 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:44:08.693231  407117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:44:08.693290  407117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:44:08.721320  407117 cri.go:89] found id: ""
	I1210 07:44:08.721336  407117 logs.go:282] 0 containers: []
	W1210 07:44:08.721343  407117 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:44:08.721348  407117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:44:08.721420  407117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:44:08.754129  407117 cri.go:89] found id: ""
	I1210 07:44:08.754144  407117 logs.go:282] 0 containers: []
	W1210 07:44:08.754152  407117 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:44:08.754157  407117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:44:08.754220  407117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:44:08.780408  407117 cri.go:89] found id: ""
	I1210 07:44:08.780424  407117 logs.go:282] 0 containers: []
	W1210 07:44:08.780431  407117 logs.go:284] No container was found matching "kindnet"
	I1210 07:44:08.780440  407117 logs.go:123] Gathering logs for kubelet ...
	I1210 07:44:08.780451  407117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:44:08.846646  407117 logs.go:123] Gathering logs for dmesg ...
	I1210 07:44:08.846664  407117 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:44:08.860703  407117 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:44:08.860718  407117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:44:08.926178  407117 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:44:08.917564    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:08.918385    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:08.919942    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:08.920517    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:08.922145    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:44:08.917564    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:08.918385    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:08.919942    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:08.920517    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:08.922145    4842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
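These connection-refused errors are the expected downstream symptom rather than a second failure: the crictl queries above found no kube-apiserver container, so nothing is listening on 8441. A direct probe from the node shows the same thing:

    curl -k https://localhost:8441/healthz   # refused while the apiserver is down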
	I1210 07:44:08.926189  407117 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:44:08.926200  407117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:44:08.957956  407117 logs.go:123] Gathering logs for container status ...
	I1210 07:44:08.957976  407117 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
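The per-component queries earlier in this block (kube-apiserver through kindnet) all follow one shape; the equivalent loop, reusing the crictl fallback from the line above:

    CRICTL="$(which crictl || echo crictl)"
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo "$CRICTL" ps -a --quiet --name="$c"   # empty output: no such container
    done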
	W1210 07:44:08.987326  407117 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 07:44:08.987367  407117 out.go:285] * 
	W1210 07:44:08.987426  407117 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:44:08.987441  407117 out.go:285] * 
	W1210 07:44:08.989804  407117 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:44:08.995124  407117 out.go:203] 
	W1210 07:44:08.997889  407117 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243388s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 07:44:08.997945  407117 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 07:44:08.997965  407117 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 07:44:09.001527  407117 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972368234Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972406143Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972458213Z" level=info msg="Create NRI interface"
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972586643Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972607747Z" level=info msg="runtime interface created"
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972621909Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972632675Z" level=info msg="runtime interface starting up..."
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972642742Z" level=info msg="starting plugins..."
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972656404Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:35:59 functional-314220 crio[842]: time="2025-12-10T07:35:59.972722301Z" level=info msg="No systemd watchdog enabled"
	Dec 10 07:35:59 functional-314220 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 07:36:02 functional-314220 crio[842]: time="2025-12-10T07:36:02.535649008Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=9ee81cf8-253b-4848-bb59-f98c93c16040 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:36:02 functional-314220 crio[842]: time="2025-12-10T07:36:02.536460988Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ad24a25b-ebde-4290-9997-357bf802f177 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:36:02 functional-314220 crio[842]: time="2025-12-10T07:36:02.537016946Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4ec80a9f-f0c2-488f-834d-7388a7e30e91 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:36:02 functional-314220 crio[842]: time="2025-12-10T07:36:02.537532764Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4fedbde3-9576-4d11-976b-13b189892528 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:36:02 functional-314220 crio[842]: time="2025-12-10T07:36:02.538011206Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eeadebb8-f2f3-4745-b548-c35e49bbd16f name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:36:02 functional-314220 crio[842]: time="2025-12-10T07:36:02.538545691Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=98e6339e-a565-458e-9bfd-ecf98f0fdc20 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:36:02 functional-314220 crio[842]: time="2025-12-10T07:36:02.53931333Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=5ef52cf9-c0ab-45ff-a371-2e2be6f4396b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:40:07 functional-314220 crio[842]: time="2025-12-10T07:40:07.277017981Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=40c9a218-5ef0-44fc-b745-44c87a675976 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:40:07 functional-314220 crio[842]: time="2025-12-10T07:40:07.277904888Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=d22e275b-03ea-4010-a5de-e4aa5e0f1425 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:40:07 functional-314220 crio[842]: time="2025-12-10T07:40:07.278402702Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=57d37a94-a7ae-40be-8e9e-7518c5204860 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:40:07 functional-314220 crio[842]: time="2025-12-10T07:40:07.278842233Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=83962070-7a35-4b2a-aa6c-d8727f81f0ce name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:40:07 functional-314220 crio[842]: time="2025-12-10T07:40:07.279551521Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=8f4b6804-6baa-43bb-8950-24ca545d448e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:40:07 functional-314220 crio[842]: time="2025-12-10T07:40:07.28000199Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9b05afef-4517-42d7-921c-ee16fc7d6208 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:40:07 functional-314220 crio[842]: time="2025-12-10T07:40:07.280424726Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=6bf985e4-2661-445c-a1c7-84a3ebd1e698 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:44:09.977545    4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:09.978041    4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:09.979857    4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:09.980554    4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:44:09.982040    4956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:44:10 up  2:26,  0 user,  load average: 0.05, 0.48, 1.11
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:44:07 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:44:07 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 10 07:44:07 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:44:07 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:44:07 functional-314220 kubelet[4770]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:44:07 functional-314220 kubelet[4770]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:44:07 functional-314220 kubelet[4770]: E1210 07:44:07.954029    4770 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:44:07 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:44:07 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:44:08 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 10 07:44:08 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:44:08 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:44:08 functional-314220 kubelet[4804]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:44:08 functional-314220 kubelet[4804]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:44:08 functional-314220 kubelet[4804]: E1210 07:44:08.734996    4804 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:44:08 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:44:08 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:44:09 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 10 07:44:09 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:44:09 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:44:09 functional-314220 kubelet[4875]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:44:09 functional-314220 kubelet[4875]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:44:09 functional-314220 kubelet[4875]: E1210 07:44:09.477887    4875 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:44:09 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:44:09 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 6 (321.015292ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1210 07:44:10.428409  412879 status.go:458] kubeconfig endpoint: get endpoint: "functional-314220" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.79s)
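The repeated kubeadm output above reduces to a single root cause, visible in the "==> kubelet <==" section: kubelet v1.35.0-beta.0 refuses to start on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a loop (restart counter 646 through 648), and kubeadm's wait-control-plane phase times out after 4m0s. Below is a minimal triage sketch; the profile name functional-314220, the --extra-config suggestion, and the systemctl/journalctl commands all come from this log, while the failCgroupV1 remark is an assumption drawn from the kubeadm warning, not a verified minikube option.

	# Check which cgroup version the host runs: prints "cgroup2fs" on v2, "tmpfs" on v1.
	stat -fc %T /sys/fs/cgroup

	# Inspect the failing kubelet inside the node (the log's own troubleshooting advice).
	minikube ssh -p functional-314220 -- sudo systemctl status kubelet
	minikube ssh -p functional-314220 -- sudo journalctl -xeu kubelet

	# Workaround minikube itself suggests above. Note it targets a cgroup-driver
	# mismatch; the validation failure here is the cgroup v1 check, which per the
	# kubeadm warning needs failCgroupV1=false in the KubeletConfiguration
	# (assumed; how to pass that through minikube is not shown in this log).
	minikube start -p functional-314220 --extra-config=kubelet.cgroup-driver=systemd

	# The status helper also reported a stale kubectl context; minikube's own fix:
	minikube update-context -p functional-314220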

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1210 07:44:10.444472  378528 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-314220 --alsologtostderr -v=8
E1210 07:45:04.883190  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:45:32.584649  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:47:27.800660  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:48:50.871987  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:50:04.883146  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-314220 --alsologtostderr -v=8: exit status 80 (6m5.243055174s)

-- stdout --
	* [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1210 07:44:10.487397  412953 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:44:10.487521  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487566  412953 out.go:374] Setting ErrFile to fd 2...
	I1210 07:44:10.487572  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487834  412953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:44:10.488205  412953 out.go:368] Setting JSON to false
	I1210 07:44:10.489052  412953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8801,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:44:10.489127  412953 start.go:143] virtualization:  
	I1210 07:44:10.492628  412953 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:44:10.495451  412953 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:44:10.495581  412953 notify.go:221] Checking for updates...
	I1210 07:44:10.501282  412953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:44:10.504171  412953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:10.506968  412953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:44:10.509885  412953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:44:10.512742  412953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:44:10.516079  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:10.516221  412953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:44:10.539133  412953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:44:10.539253  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.606789  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.597593273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.606896  412953 docker.go:319] overlay module found
	I1210 07:44:10.611915  412953 out.go:179] * Using the docker driver based on existing profile
	I1210 07:44:10.614862  412953 start.go:309] selected driver: docker
	I1210 07:44:10.614885  412953 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.614994  412953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:44:10.615113  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.673141  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.664474897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.673572  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:10.673631  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:10.673679  412953 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.678679  412953 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:44:10.681372  412953 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:44:10.684277  412953 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:44:10.687267  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:10.687329  412953 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:44:10.687343  412953 cache.go:65] Caching tarball of preloaded images
	I1210 07:44:10.687350  412953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:44:10.687434  412953 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:44:10.687444  412953 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 07:44:10.687550  412953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:44:10.707132  412953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:44:10.707156  412953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:44:10.707176  412953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:44:10.707214  412953 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:44:10.707283  412953 start.go:364] duration metric: took 45.104µs to acquireMachinesLock for "functional-314220"
	I1210 07:44:10.707306  412953 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:44:10.707317  412953 fix.go:54] fixHost starting: 
	I1210 07:44:10.707577  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:10.723920  412953 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:44:10.723951  412953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:44:10.727176  412953 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:44:10.727205  412953 machine.go:94] provisionDockerMachine start ...
	I1210 07:44:10.727283  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.744553  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.744931  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.744946  412953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:44:10.878742  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:10.878763  412953 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:44:10.878828  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.897712  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.898057  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.898077  412953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:44:11.052065  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:11.052160  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.072344  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.072686  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.072703  412953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:44:11.207289  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:44:11.207317  412953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:44:11.207348  412953 ubuntu.go:190] setting up certificates
	I1210 07:44:11.207366  412953 provision.go:84] configureAuth start
	I1210 07:44:11.207429  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:11.224935  412953 provision.go:143] copyHostCerts
	I1210 07:44:11.224978  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225021  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:44:11.225032  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225107  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:44:11.225201  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225224  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:44:11.225234  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225268  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:44:11.225321  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225345  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:44:11.225354  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225380  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:44:11.225441  412953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:44:11.417392  412953 provision.go:177] copyRemoteCerts
	I1210 07:44:11.417460  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:44:11.417497  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.436410  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:11.535532  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 07:44:11.535603  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:44:11.553463  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 07:44:11.553526  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:44:11.571834  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 07:44:11.571892  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:44:11.590409  412953 provision.go:87] duration metric: took 383.016251ms to configureAuth
	I1210 07:44:11.590435  412953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:44:11.590614  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:11.590731  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.608257  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.608571  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.608596  412953 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:44:11.906129  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:44:11.906170  412953 machine.go:97] duration metric: took 1.17895657s to provisionDockerMachine
	I1210 07:44:11.906181  412953 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:44:11.906194  412953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:44:11.906264  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:44:11.906303  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.923285  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.019543  412953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:44:12.023176  412953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 07:44:12.023203  412953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 07:44:12.023208  412953 command_runner.go:130] > VERSION_ID="12"
	I1210 07:44:12.023217  412953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 07:44:12.023222  412953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 07:44:12.023226  412953 command_runner.go:130] > ID=debian
	I1210 07:44:12.023231  412953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 07:44:12.023236  412953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 07:44:12.023245  412953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 07:44:12.023295  412953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:44:12.023316  412953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:44:12.023330  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:44:12.023386  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:44:12.023472  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:44:12.023483  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /etc/ssl/certs/3785282.pem
	I1210 07:44:12.023563  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:44:12.023571  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> /etc/test/nested/copy/378528/hosts
	I1210 07:44:12.023617  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:44:12.031659  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:12.049814  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:44:12.067644  412953 start.go:296] duration metric: took 161.447867ms for postStartSetup
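
The filesync scan above maps every file under the local files tree to the same relative path on the node (e.g. etc/ssl/certs/3785282.pem -> /etc/ssl/certs/3785282.pem). A sketch of that mapping with filepath.WalkDir; the local root is an assumed example path:

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// scanAssets walks a local "files" tree and returns source->destination pairs,
// preserving the relative path on the target node, as the filesync scan does.
func scanAssets(root string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		assets[path] = "/" + filepath.ToSlash(rel) // e.g. /etc/ssl/certs/... on the node
		return nil
	})
	return assets, err
}

func main() {
	m, err := scanAssets(".minikube/files") // hypothetical local root
	fmt.Println(m, err)
}
```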
	I1210 07:44:12.067748  412953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:44:12.067798  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.084856  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.184547  412953 command_runner.go:130] > 14%
	I1210 07:44:12.184639  412953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:44:12.189562  412953 command_runner.go:130] > 169G
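
The two df probes above read the used percentage and the free gigabytes for /var. A small Go equivalent of the same shell pipeline (fields match the awk selections in the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dfField runs df with the given flag and returns one awk-selected field for /var,
// mirroring the `df -h` / `df -BG` probes above.
func dfField(flag, field string) (string, error) {
	cmd := exec.Command("sh", "-c",
		fmt.Sprintf("df %s /var | awk 'NR==2{print %s}'", flag, field))
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	usedPct, _ := dfField("-h", "$5") // e.g. "14%"
	freeG, _ := dfField("-BG", "$4")  // e.g. "169G"
	fmt.Println(usedPct, freeG)
}
```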
	I1210 07:44:12.189589  412953 fix.go:56] duration metric: took 1.4822703s for fixHost
	I1210 07:44:12.189600  412953 start.go:83] releasing machines lock for "functional-314220", held for 1.482305303s
	I1210 07:44:12.189668  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:12.206193  412953 ssh_runner.go:195] Run: cat /version.json
	I1210 07:44:12.206242  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.206484  412953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:44:12.206547  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.229509  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.231766  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.322395  412953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 07:44:12.322525  412953 ssh_runner.go:195] Run: systemctl --version
	I1210 07:44:12.409743  412953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 07:44:12.412779  412953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 07:44:12.412818  412953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
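
The `curl -sS -m 2 https://registry.k8s.io/` call above is a reachability probe with a two-second budget; the redirect body counts as success. A hedged Go equivalent using a hard client timeout:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeRegistry fetches the registry root with a 2s timeout, matching the
// curl probe; any HTTP response means the registry is reachable.
func probeRegistry(url string) (string, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	body, err := probeRegistry("https://registry.k8s.io/")
	fmt.Println(body, err)
}
```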
	I1210 07:44:12.412894  412953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:44:12.460937  412953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 07:44:12.466609  412953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 07:44:12.466697  412953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:44:12.466802  412953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:44:12.474626  412953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:44:12.474651  412953 start.go:496] detecting cgroup driver to use...
	I1210 07:44:12.474708  412953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:44:12.474780  412953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:44:12.490092  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:44:12.503562  412953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:44:12.503627  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:44:12.518840  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:44:12.531838  412953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:44:12.642559  412953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:44:12.762873  412953 docker.go:234] disabling docker service ...
	I1210 07:44:12.762979  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:44:12.778725  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:44:12.791652  412953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:44:12.911705  412953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:44:13.035394  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
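
The cri-docker and docker units above are stopped, disabled, and masked in sequence, with failures tolerated since the runtime may not be installed. A sketch of that tolerant disable loop (helper name is made up; sudo is assumed available):

```go
package main

import (
	"fmt"
	"os/exec"
)

// silenceUnit stops, disables, and masks a systemd unit, ignoring errors the
// way the sequence above tolerates absent runtimes.
func silenceUnit(unit string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v (ignored): %s", err, out)
		}
	}
}

func main() {
	for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		silenceUnit(u)
	}
}
```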
	I1210 07:44:13.049695  412953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:44:13.065431  412953 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1210 07:44:13.065522  412953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:44:13.065609  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.075381  412953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:44:13.075482  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.085452  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.094855  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.104471  412953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:44:13.112786  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.121728  412953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.130205  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
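
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs manager, reset conmon_cgroup, and open unprivileged ports via default_sysctls. A regexp sketch of the first rewrites, assuming the same file path; real code would write atomically and preserve permissions, and the default_sysctls steps are analogous:

```go
package main

import (
	"os"
	"regexp"
)

// patchCrioConf applies the same line rewrites the sed commands above perform.
func patchCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return os.WriteFile(path, []byte(s), 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}
```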
	I1210 07:44:13.139248  412953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:44:13.145900  412953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 07:44:13.147163  412953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:44:13.154995  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.289205  412953 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:44:13.445871  412953 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:44:13.446002  412953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:44:13.449677  412953 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 07:44:13.449750  412953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 07:44:13.449774  412953 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1210 07:44:13.449787  412953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:13.449793  412953 command_runner.go:130] > Access: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449816  412953 command_runner.go:130] > Modify: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449826  412953 command_runner.go:130] > Change: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449830  412953 command_runner.go:130] >  Birth: -
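
The "Will wait 60s for socket path" step above amounts to polling stat on /var/run/crio/crio.sock until the restarted daemon creates it. A minimal polling sketch of that wait:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the deadline passes, like the
// 60s wait for /var/run/crio/crio.sock in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
```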
	I1210 07:44:13.449864  412953 start.go:564] Will wait 60s for crictl version
	I1210 07:44:13.449928  412953 ssh_runner.go:195] Run: which crictl
	I1210 07:44:13.453538  412953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 07:44:13.453678  412953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:44:13.477475  412953 command_runner.go:130] > Version:  0.1.0
	I1210 07:44:13.477498  412953 command_runner.go:130] > RuntimeName:  cri-o
	I1210 07:44:13.477503  412953 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 07:44:13.477509  412953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 07:44:13.477520  412953 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:44:13.477602  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.505751  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.505796  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.505803  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.505808  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.505813  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.505817  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.505821  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.505826  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.505835  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.505838  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.505844  412953 command_runner.go:130] >      static
	I1210 07:44:13.505848  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.505852  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.505859  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.505863  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.505874  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.505877  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.505881  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.505886  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.505895  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.507701  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.535170  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.535233  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.535254  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.535275  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.535296  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.535314  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.535334  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.535358  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.535377  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.535395  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.535414  412953 command_runner.go:130] >      static
	I1210 07:44:13.535432  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.535451  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.535471  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.535489  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.535518  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.535548  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.535566  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.535590  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.535609  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.540516  412953 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:44:13.543340  412953 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:44:13.558881  412953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:44:13.562785  412953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 07:44:13.562964  412953 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:44:13.563103  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:13.563170  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.592036  412953 command_runner.go:130] > {
	I1210 07:44:13.592059  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.592064  412953 command_runner.go:130] >     {
	I1210 07:44:13.592073  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.592083  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592089  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.592093  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592096  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592118  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.592130  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.592138  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592144  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.592154  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592159  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592163  412953 command_runner.go:130] >     },
	I1210 07:44:13.592169  412953 command_runner.go:130] >     {
	I1210 07:44:13.592176  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.592183  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592189  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.592192  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592196  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592207  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.592217  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.592221  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592225  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.592231  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592239  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592246  412953 command_runner.go:130] >     },
	I1210 07:44:13.592249  412953 command_runner.go:130] >     {
	I1210 07:44:13.592255  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.592264  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592269  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.592272  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592278  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592286  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.592297  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.592300  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592306  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.592311  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.592317  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592320  412953 command_runner.go:130] >     },
	I1210 07:44:13.592329  412953 command_runner.go:130] >     {
	I1210 07:44:13.592338  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.592342  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592354  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.592357  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592361  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592374  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.592381  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.592387  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592391  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.592395  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592401  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592405  412953 command_runner.go:130] >       },
	I1210 07:44:13.592420  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592424  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592429  412953 command_runner.go:130] >     },
	I1210 07:44:13.592433  412953 command_runner.go:130] >     {
	I1210 07:44:13.592446  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.592450  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592457  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.592461  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592465  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592474  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.592484  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.592488  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592494  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.592498  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592522  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592525  412953 command_runner.go:130] >       },
	I1210 07:44:13.592530  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592538  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592541  412953 command_runner.go:130] >     },
	I1210 07:44:13.592545  412953 command_runner.go:130] >     {
	I1210 07:44:13.592556  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.592563  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592569  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.592579  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592582  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592591  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.592602  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.592606  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592616  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.592619  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592623  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592628  412953 command_runner.go:130] >       },
	I1210 07:44:13.592633  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592639  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592642  412953 command_runner.go:130] >     },
	I1210 07:44:13.592645  412953 command_runner.go:130] >     {
	I1210 07:44:13.592652  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.592663  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592669  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.592674  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592678  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592691  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.592702  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.592706  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592712  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.592717  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592723  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592726  412953 command_runner.go:130] >     },
	I1210 07:44:13.592729  412953 command_runner.go:130] >     {
	I1210 07:44:13.592735  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.592741  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592747  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.592750  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592764  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592772  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.592793  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.592800  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592804  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.592808  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592817  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592820  412953 command_runner.go:130] >       },
	I1210 07:44:13.592824  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592830  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592834  412953 command_runner.go:130] >     },
	I1210 07:44:13.592843  412953 command_runner.go:130] >     {
	I1210 07:44:13.592849  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.592853  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592858  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.592866  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592870  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592878  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.592888  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.592892  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592898  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.592902  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592911  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.592914  412953 command_runner.go:130] >       },
	I1210 07:44:13.592918  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592924  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.592927  412953 command_runner.go:130] >     }
	I1210 07:44:13.592932  412953 command_runner.go:130] >   ]
	I1210 07:44:13.592935  412953 command_runner.go:130] > }
	I1210 07:44:13.595219  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.595245  412953 crio.go:433] Images already preloaded, skipping extraction
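
The preload check above parses the `crictl images --output json` payload and compares repo tags against the images expected for v1.35.0-beta.0 on crio. A decoding sketch whose struct fields match the JSON shown in the log (the helper names are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the fields visible in the crictl JSON above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// loadedTags returns the set of repo tags currently known to CRI-O.
func loadedTags() (map[string]bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return nil, err
	}
	tags := map[string]bool{}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			tags[t] = true
		}
	}
	return tags, nil
}

func main() {
	tags, err := loadedTags()
	fmt.Println(tags["registry.k8s.io/pause:3.10.1"], err)
}
```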
	I1210 07:44:13.595305  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.620833  412953 command_runner.go:130] > {
	I1210 07:44:13.620851  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.620856  412953 command_runner.go:130] >     {
	I1210 07:44:13.620865  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.620870  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620884  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.620888  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620896  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620905  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.620913  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.620917  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620921  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.620925  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620930  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620933  412953 command_runner.go:130] >     },
	I1210 07:44:13.620936  412953 command_runner.go:130] >     {
	I1210 07:44:13.620943  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.620947  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620952  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.620955  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620958  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620966  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.620975  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.620978  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620982  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.620985  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620991  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620994  412953 command_runner.go:130] >     },
	I1210 07:44:13.620997  412953 command_runner.go:130] >     {
	I1210 07:44:13.621003  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.621007  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621012  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.621015  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621019  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621027  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.621035  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.621038  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621042  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.621046  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.621049  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621056  412953 command_runner.go:130] >     },
	I1210 07:44:13.621059  412953 command_runner.go:130] >     {
	I1210 07:44:13.621066  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.621070  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621075  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.621079  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621083  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621091  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.621098  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.621102  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621105  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.621109  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621113  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621116  412953 command_runner.go:130] >       },
	I1210 07:44:13.621124  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621128  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621131  412953 command_runner.go:130] >     },
	I1210 07:44:13.621134  412953 command_runner.go:130] >     {
	I1210 07:44:13.621143  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.621147  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621152  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.621156  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621159  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621167  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.621175  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.621178  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621182  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.621185  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621189  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621192  412953 command_runner.go:130] >       },
	I1210 07:44:13.621196  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621199  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621202  412953 command_runner.go:130] >     },
	I1210 07:44:13.621208  412953 command_runner.go:130] >     {
	I1210 07:44:13.621214  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.621218  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621224  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.621227  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621231  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621239  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.621247  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.621250  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621255  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.621258  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621262  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621265  412953 command_runner.go:130] >       },
	I1210 07:44:13.621268  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621272  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621275  412953 command_runner.go:130] >     },
	I1210 07:44:13.621278  412953 command_runner.go:130] >     {
	I1210 07:44:13.621285  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.621289  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621294  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.621297  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621301  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621309  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.621317  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.621320  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621324  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.621327  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621331  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621334  412953 command_runner.go:130] >     },
	I1210 07:44:13.621337  412953 command_runner.go:130] >     {
	I1210 07:44:13.621343  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.621347  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621352  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.621359  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621363  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621371  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.621390  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.621393  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621397  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.621401  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621404  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621408  412953 command_runner.go:130] >       },
	I1210 07:44:13.621411  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621415  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621418  412953 command_runner.go:130] >     },
	I1210 07:44:13.621421  412953 command_runner.go:130] >     {
	I1210 07:44:13.621427  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.621431  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621435  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.621438  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621442  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621449  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.621456  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.621459  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621463  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.621466  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621470  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.621473  412953 command_runner.go:130] >       },
	I1210 07:44:13.621477  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621481  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.621483  412953 command_runner.go:130] >     }
	I1210 07:44:13.621486  412953 command_runner.go:130] >   ]
	I1210 07:44:13.621490  412953 command_runner.go:130] > }
	I1210 07:44:13.622855  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.622877  412953 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:44:13.622884  412953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:44:13.622995  412953 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
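
The kubelet drop-in above is rendered from the cluster config (Kubernetes version, hostname override, node IP). A text/template sketch of that rendering; the struct and template names are illustrative, while the substituted values are the ones shown in this log:

```go
package main

import (
	"os"
	"text/template"
)

// unitParams holds the values substituted into the kubelet drop-in above.
type unitParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the log above.
	err := t.Execute(os.Stdout, unitParams{
		KubernetesVersion: "v1.35.0-beta.0",
		NodeName:          "functional-314220",
		NodeIP:            "192.168.49.2",
	})
	if err != nil {
		panic(err)
	}
}
```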
	I1210 07:44:13.623104  412953 ssh_runner.go:195] Run: crio config
	I1210 07:44:13.670610  412953 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 07:44:13.670640  412953 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 07:44:13.670648  412953 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 07:44:13.670652  412953 command_runner.go:130] > #
	I1210 07:44:13.670659  412953 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 07:44:13.670667  412953 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 07:44:13.670674  412953 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 07:44:13.670691  412953 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 07:44:13.670699  412953 command_runner.go:130] > # reload'.
	I1210 07:44:13.670706  412953 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 07:44:13.670713  412953 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 07:44:13.670722  412953 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 07:44:13.670728  412953 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 07:44:13.670733  412953 command_runner.go:130] > [crio]
	I1210 07:44:13.670747  412953 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 07:44:13.670755  412953 command_runner.go:130] > # containers images, in this directory.
	I1210 07:44:13.670764  412953 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 07:44:13.670774  412953 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 07:44:13.670784  412953 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 07:44:13.670792  412953 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 07:44:13.670799  412953 command_runner.go:130] > # imagestore = ""
	I1210 07:44:13.670805  412953 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 07:44:13.670812  412953 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 07:44:13.670819  412953 command_runner.go:130] > # storage_driver = "overlay"
	I1210 07:44:13.670826  412953 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 07:44:13.670832  412953 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 07:44:13.670839  412953 command_runner.go:130] > # storage_option = [
	I1210 07:44:13.670842  412953 command_runner.go:130] > # ]
	I1210 07:44:13.670848  412953 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 07:44:13.670854  412953 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 07:44:13.670864  412953 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 07:44:13.670876  412953 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 07:44:13.670886  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 07:44:13.670890  412953 command_runner.go:130] > # always happen on a node reboot
	I1210 07:44:13.670897  412953 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 07:44:13.670908  412953 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 07:44:13.670916  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 07:44:13.670921  412953 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 07:44:13.670927  412953 command_runner.go:130] > # version_file_persist = ""
	I1210 07:44:13.670948  412953 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 07:44:13.670957  412953 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 07:44:13.670965  412953 command_runner.go:130] > # internal_wipe = true
	I1210 07:44:13.670973  412953 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 07:44:13.670982  412953 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 07:44:13.670986  412953 command_runner.go:130] > # internal_repair = true
	I1210 07:44:13.670992  412953 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 07:44:13.671000  412953 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 07:44:13.671005  412953 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 07:44:13.671033  412953 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 07:44:13.671041  412953 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 07:44:13.671047  412953 command_runner.go:130] > [crio.api]
	I1210 07:44:13.671052  412953 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 07:44:13.671057  412953 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 07:44:13.671064  412953 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 07:44:13.671297  412953 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 07:44:13.671315  412953 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 07:44:13.671322  412953 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 07:44:13.671326  412953 command_runner.go:130] > # stream_port = "0"
	I1210 07:44:13.671356  412953 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 07:44:13.671366  412953 command_runner.go:130] > # stream_enable_tls = false
	I1210 07:44:13.671373  412953 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 07:44:13.671558  412953 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 07:44:13.671575  412953 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 07:44:13.671582  412953 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671587  412953 command_runner.go:130] > # stream_tls_cert = ""
	I1210 07:44:13.671593  412953 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 07:44:13.671617  412953 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671819  412953 command_runner.go:130] > # stream_tls_key = ""
	I1210 07:44:13.671835  412953 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 07:44:13.671853  412953 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 07:44:13.671864  412953 command_runner.go:130] > # automatically pick up the changes.
	I1210 07:44:13.671868  412953 command_runner.go:130] > # stream_tls_ca = ""
	I1210 07:44:13.671887  412953 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.671896  412953 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 07:44:13.671903  412953 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.672102  412953 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 07:44:13.672121  412953 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 07:44:13.672128  412953 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 07:44:13.672131  412953 command_runner.go:130] > [crio.runtime]
	I1210 07:44:13.672137  412953 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 07:44:13.672162  412953 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 07:44:13.672172  412953 command_runner.go:130] > # "nofile=1024:2048"
	I1210 07:44:13.672179  412953 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 07:44:13.672183  412953 command_runner.go:130] > # default_ulimits = [
	I1210 07:44:13.672188  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672195  412953 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 07:44:13.672201  412953 command_runner.go:130] > # no_pivot = false
	I1210 07:44:13.672207  412953 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 07:44:13.672214  412953 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 07:44:13.672219  412953 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 07:44:13.672235  412953 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 07:44:13.672241  412953 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 07:44:13.672248  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672445  412953 command_runner.go:130] > # conmon = ""
	I1210 07:44:13.672461  412953 command_runner.go:130] > # Cgroup setting for conmon
	I1210 07:44:13.672469  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 07:44:13.672473  412953 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 07:44:13.672480  412953 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 07:44:13.672502  412953 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 07:44:13.672522  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672875  412953 command_runner.go:130] > # conmon_env = [
	I1210 07:44:13.672888  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672895  412953 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 07:44:13.672900  412953 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 07:44:13.672907  412953 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 07:44:13.673114  412953 command_runner.go:130] > # default_env = [
	I1210 07:44:13.673128  412953 command_runner.go:130] > # ]
	I1210 07:44:13.673149  412953 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 07:44:13.673177  412953 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1210 07:44:13.673192  412953 command_runner.go:130] > # selinux = false
	I1210 07:44:13.673200  412953 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 07:44:13.673211  412953 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 07:44:13.673216  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673222  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.673228  412953 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 07:44:13.673240  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673428  412953 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 07:44:13.673444  412953 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 07:44:13.673452  412953 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 07:44:13.673459  412953 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 07:44:13.673478  412953 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 07:44:13.673488  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673492  412953 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 07:44:13.673498  412953 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 07:44:13.673505  412953 command_runner.go:130] > # the cgroup blockio controller.
	I1210 07:44:13.673509  412953 command_runner.go:130] > # blockio_config_file = ""
	I1210 07:44:13.673515  412953 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 07:44:13.673522  412953 command_runner.go:130] > # blockio parameters.
	I1210 07:44:13.673725  412953 command_runner.go:130] > # blockio_reload = false
	I1210 07:44:13.673738  412953 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 07:44:13.673742  412953 command_runner.go:130] > # irqbalance daemon.
	I1210 07:44:13.673748  412953 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 07:44:13.673757  412953 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1210 07:44:13.673788  412953 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 07:44:13.673801  412953 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 07:44:13.673807  412953 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 07:44:13.673816  412953 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 07:44:13.673821  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673830  412953 command_runner.go:130] > # rdt_config_file = ""
	I1210 07:44:13.673837  412953 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 07:44:13.674053  412953 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 07:44:13.674071  412953 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 07:44:13.674076  412953 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 07:44:13.674083  412953 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 07:44:13.674102  412953 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 07:44:13.674116  412953 command_runner.go:130] > # will be added.
	I1210 07:44:13.674121  412953 command_runner.go:130] > # default_capabilities = [
	I1210 07:44:13.674343  412953 command_runner.go:130] > # 	"CHOWN",
	I1210 07:44:13.674352  412953 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 07:44:13.674356  412953 command_runner.go:130] > # 	"FSETID",
	I1210 07:44:13.674359  412953 command_runner.go:130] > # 	"FOWNER",
	I1210 07:44:13.674363  412953 command_runner.go:130] > # 	"SETGID",
	I1210 07:44:13.674366  412953 command_runner.go:130] > # 	"SETUID",
	I1210 07:44:13.674423  412953 command_runner.go:130] > # 	"SETPCAP",
	I1210 07:44:13.674435  412953 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 07:44:13.674593  412953 command_runner.go:130] > # 	"KILL",
	I1210 07:44:13.674604  412953 command_runner.go:130] > # ]
	I1210 07:44:13.674621  412953 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 07:44:13.674632  412953 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 07:44:13.674812  412953 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 07:44:13.674829  412953 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 07:44:13.674836  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.674844  412953 command_runner.go:130] > default_sysctls = [
	I1210 07:44:13.674849  412953 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 07:44:13.674855  412953 command_runner.go:130] > ]
	I1210 07:44:13.674860  412953 command_runner.go:130] > # List of devices on the host that a
	I1210 07:44:13.674883  412953 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 07:44:13.674902  412953 command_runner.go:130] > # allowed_devices = [
	I1210 07:44:13.675282  412953 command_runner.go:130] > # 	"/dev/fuse",
	I1210 07:44:13.675296  412953 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 07:44:13.675300  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675305  412953 command_runner.go:130] > # List of additional devices, specified as
	I1210 07:44:13.675313  412953 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 07:44:13.675339  412953 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 07:44:13.675346  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.675350  412953 command_runner.go:130] > # additional_devices = [
	I1210 07:44:13.675524  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675539  412953 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 07:44:13.675543  412953 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 07:44:13.675549  412953 command_runner.go:130] > # 	"/etc/cdi",
	I1210 07:44:13.675552  412953 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 07:44:13.675555  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675562  412953 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 07:44:13.675584  412953 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 07:44:13.675594  412953 command_runner.go:130] > # Defaults to false.
	I1210 07:44:13.675951  412953 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 07:44:13.675970  412953 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 07:44:13.675978  412953 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 07:44:13.675982  412953 command_runner.go:130] > # hooks_dir = [
	I1210 07:44:13.676213  412953 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 07:44:13.676224  412953 command_runner.go:130] > # ]
	I1210 07:44:13.676231  412953 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 07:44:13.676237  412953 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 07:44:13.676246  412953 command_runner.go:130] > # its default mounts from the following two files:
	I1210 07:44:13.676261  412953 command_runner.go:130] > #
	I1210 07:44:13.676273  412953 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 07:44:13.676280  412953 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 07:44:13.676286  412953 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 07:44:13.676291  412953 command_runner.go:130] > #
	I1210 07:44:13.676298  412953 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 07:44:13.676304  412953 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 07:44:13.676313  412953 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 07:44:13.676318  412953 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 07:44:13.676321  412953 command_runner.go:130] > #
	I1210 07:44:13.676325  412953 command_runner.go:130] > # default_mounts_file = ""
	I1210 07:44:13.676345  412953 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 07:44:13.676358  412953 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 07:44:13.676363  412953 command_runner.go:130] > # pids_limit = -1
	I1210 07:44:13.676375  412953 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 07:44:13.676381  412953 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 07:44:13.676391  412953 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 07:44:13.676400  412953 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 07:44:13.676412  412953 command_runner.go:130] > # log_size_max = -1
	I1210 07:44:13.676423  412953 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 07:44:13.676626  412953 command_runner.go:130] > # log_to_journald = false
	I1210 07:44:13.676643  412953 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 07:44:13.676650  412953 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 07:44:13.676677  412953 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 07:44:13.676879  412953 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 07:44:13.676891  412953 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 07:44:13.676896  412953 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 07:44:13.676903  412953 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 07:44:13.676909  412953 command_runner.go:130] > # read_only = false
	I1210 07:44:13.676916  412953 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 07:44:13.676942  412953 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 07:44:13.676953  412953 command_runner.go:130] > # live configuration reload.
	I1210 07:44:13.676956  412953 command_runner.go:130] > # log_level = "info"
	I1210 07:44:13.676967  412953 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 07:44:13.676977  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.677149  412953 command_runner.go:130] > # log_filter = ""
	I1210 07:44:13.677166  412953 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677173  412953 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 07:44:13.677177  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677186  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677212  412953 command_runner.go:130] > # uid_mappings = ""
	I1210 07:44:13.677225  412953 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677231  412953 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 07:44:13.677238  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677246  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677420  412953 command_runner.go:130] > # gid_mappings = ""
	I1210 07:44:13.677432  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 07:44:13.677439  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677446  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677455  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677480  412953 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 07:44:13.677493  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 07:44:13.677500  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677512  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677522  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677681  412953 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 07:44:13.677697  412953 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 07:44:13.677705  412953 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 07:44:13.677711  412953 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1210 07:44:13.677936  412953 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 07:44:13.677953  412953 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 07:44:13.677960  412953 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 07:44:13.677965  412953 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 07:44:13.677970  412953 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 07:44:13.677991  412953 command_runner.go:130] > # drop_infra_ctr = true
	I1210 07:44:13.678004  412953 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 07:44:13.678011  412953 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 07:44:13.678020  412953 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 07:44:13.678031  412953 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 07:44:13.678039  412953 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 07:44:13.678048  412953 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 07:44:13.678054  412953 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 07:44:13.678068  412953 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 07:44:13.678282  412953 command_runner.go:130] > # shared_cpuset = ""
	I1210 07:44:13.678299  412953 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 07:44:13.678306  412953 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 07:44:13.678310  412953 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 07:44:13.678328  412953 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 07:44:13.678337  412953 command_runner.go:130] > # pinns_path = ""
	I1210 07:44:13.678343  412953 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 07:44:13.678349  412953 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 07:44:13.678540  412953 command_runner.go:130] > # enable_criu_support = true
	I1210 07:44:13.678551  412953 command_runner.go:130] > # Enable/disable the generation of the container,
	I1210 07:44:13.678558  412953 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1210 07:44:13.678563  412953 command_runner.go:130] > # enable_pod_events = false
	I1210 07:44:13.678572  412953 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 07:44:13.678599  412953 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 07:44:13.678604  412953 command_runner.go:130] > # default_runtime = "crun"
	I1210 07:44:13.678609  412953 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 07:44:13.678622  412953 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1210 07:44:13.678632  412953 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 07:44:13.678642  412953 command_runner.go:130] > # creation as a file is not desired either.
	I1210 07:44:13.678651  412953 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 07:44:13.678663  412953 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 07:44:13.678672  412953 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 07:44:13.678923  412953 command_runner.go:130] > # ]
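As a sketch, the /etc/hostname case described in the comments above could be covered like this (the path is the one given as an example there; whether to reject it depends on the node setup):

```toml
# Hedged sketch: fail container creation instead of auto-creating
# /etc/hostname as a directory when it is absent on the host.
absent_mount_sources_to_reject = [
	"/etc/hostname",
]
```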
	I1210 07:44:13.678950  412953 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 07:44:13.678958  412953 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 07:44:13.678972  412953 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 07:44:13.678982  412953 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 07:44:13.678985  412953 command_runner.go:130] > #
	I1210 07:44:13.678990  412953 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 07:44:13.678995  412953 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 07:44:13.679001  412953 command_runner.go:130] > # runtime_type = "oci"
	I1210 07:44:13.679006  412953 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 07:44:13.679035  412953 command_runner.go:130] > # inherit_default_runtime = false
	I1210 07:44:13.679045  412953 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 07:44:13.679050  412953 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 07:44:13.679054  412953 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 07:44:13.679060  412953 command_runner.go:130] > # monitor_env = []
	I1210 07:44:13.679065  412953 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 07:44:13.679069  412953 command_runner.go:130] > # allowed_annotations = []
	I1210 07:44:13.679076  412953 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 07:44:13.679085  412953 command_runner.go:130] > # no_sync_log = false
	I1210 07:44:13.679101  412953 command_runner.go:130] > # default_annotations = {}
	I1210 07:44:13.679107  412953 command_runner.go:130] > # stream_websockets = false
	I1210 07:44:13.679111  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.679142  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.679152  412953 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 07:44:13.679158  412953 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 07:44:13.679174  412953 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 07:44:13.679188  412953 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 07:44:13.679194  412953 command_runner.go:130] > #   in $PATH.
	I1210 07:44:13.679200  412953 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 07:44:13.679207  412953 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 07:44:13.679213  412953 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 07:44:13.679219  412953 command_runner.go:130] > #   state.
	I1210 07:44:13.679225  412953 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 07:44:13.679231  412953 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 07:44:13.679240  412953 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 07:44:13.679252  412953 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 07:44:13.679260  412953 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 07:44:13.679267  412953 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 07:44:13.679274  412953 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 07:44:13.679281  412953 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 07:44:13.679291  412953 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 07:44:13.679296  412953 command_runner.go:130] > #   The currently recognized values are:
	I1210 07:44:13.679302  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 07:44:13.679311  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 07:44:13.679325  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 07:44:13.679338  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 07:44:13.679345  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 07:44:13.679357  412953 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 07:44:13.679365  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 07:44:13.679374  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 07:44:13.679380  412953 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 07:44:13.679398  412953 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 07:44:13.679409  412953 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 07:44:13.679420  412953 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 07:44:13.679430  412953 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 07:44:13.679436  412953 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 07:44:13.679445  412953 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 07:44:13.679452  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 07:44:13.679461  412953 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 07:44:13.679464  412953 command_runner.go:130] > #   deprecated option "conmon".
	I1210 07:44:13.679478  412953 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 07:44:13.679487  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 07:44:13.679493  412953 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 07:44:13.679503  412953 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 07:44:13.679511  412953 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 07:44:13.679518  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 07:44:13.679525  412953 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1210 07:44:13.679531  412953 command_runner.go:130] > #   conmon-rs by using:
	I1210 07:44:13.679539  412953 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 07:44:13.679560  412953 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 07:44:13.679570  412953 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 07:44:13.679579  412953 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 07:44:13.679584  412953 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 07:44:13.679593  412953 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 07:44:13.679603  412953 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 07:44:13.679608  412953 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 07:44:13.679617  412953 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 07:44:13.679637  412953 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 07:44:13.679641  412953 command_runner.go:130] > #   when a machine crash happens.
	I1210 07:44:13.679649  412953 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 07:44:13.679659  412953 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 07:44:13.679667  412953 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 07:44:13.679675  412953 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 07:44:13.679681  412953 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 07:44:13.679700  412953 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1210 07:44:13.679707  412953 command_runner.go:130] > #
	I1210 07:44:13.679712  412953 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 07:44:13.679716  412953 command_runner.go:130] > #
	I1210 07:44:13.679727  412953 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 07:44:13.679736  412953 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 07:44:13.679742  412953 command_runner.go:130] > #
	I1210 07:44:13.679749  412953 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 07:44:13.679756  412953 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 07:44:13.679761  412953 command_runner.go:130] > #
	I1210 07:44:13.679773  412953 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 07:44:13.679780  412953 command_runner.go:130] > # feature.
	I1210 07:44:13.679782  412953 command_runner.go:130] > #
	I1210 07:44:13.679788  412953 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1210 07:44:13.679799  412953 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 07:44:13.679805  412953 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 07:44:13.679811  412953 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 07:44:13.679819  412953 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 07:44:13.679824  412953 command_runner.go:130] > #
	I1210 07:44:13.679831  412953 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 07:44:13.679840  412953 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 07:44:13.679848  412953 command_runner.go:130] > #
	I1210 07:44:13.679858  412953 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1210 07:44:13.679864  412953 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 07:44:13.679869  412953 command_runner.go:130] > #
	I1210 07:44:13.679875  412953 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 07:44:13.679881  412953 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 07:44:13.679887  412953 command_runner.go:130] > # limitation.
	I1210 07:44:13.679891  412953 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 07:44:13.679896  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 07:44:13.679902  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.679909  412953 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 07:44:13.679913  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.679932  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.679940  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.679944  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.679948  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.679957  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.679961  412953 command_runner.go:130] > allowed_annotations = [
	I1210 07:44:13.680169  412953 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 07:44:13.680183  412953 command_runner.go:130] > ]
	I1210 07:44:13.680190  412953 command_runner.go:130] > privileged_without_host_devices = false
	I1210 07:44:13.680195  412953 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 07:44:13.680200  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 07:44:13.680204  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.680218  412953 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 07:44:13.680228  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.680233  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.680237  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.680244  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.680248  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.680257  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.680461  412953 command_runner.go:130] > privileged_without_host_devices = false
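Putting the field reference above together, a hypothetical extra handler entry might look like the sketch below. The handler name, binary path and state dir are invented for illustration; only the field names and the conmon/annotation values come from this config.

```toml
# Hedged sketch of a custom runtime handler; "myruntime" and its
# paths are hypothetical. Field names follow the reference above.
[crio.runtime.runtimes.myruntime]
runtime_path = "/usr/local/bin/myruntime"   # hypothetical binary
runtime_type = "oci"
runtime_root = "/run/myruntime"             # hypothetical state dir
monitor_path = "/usr/libexec/crio/conmon"
monitor_cgroup = "pod"
allowed_annotations = [
	"io.kubernetes.cri-o.Devices",
]
privileged_without_host_devices = false
```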
	I1210 07:44:13.680480  412953 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 07:44:13.680486  412953 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 07:44:13.680503  412953 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 07:44:13.680522  412953 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1210 07:44:13.680533  412953 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 07:44:13.680547  412953 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 07:44:13.680554  412953 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 07:44:13.680563  412953 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 07:44:13.680579  412953 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 07:44:13.680591  412953 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 07:44:13.680597  412953 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 07:44:13.680609  412953 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 07:44:13.680613  412953 command_runner.go:130] > # Example:
	I1210 07:44:13.680617  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 07:44:13.680625  412953 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 07:44:13.680632  412953 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 07:44:13.680643  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 07:44:13.680656  412953 command_runner.go:130] > # cpuset = "0-1"
	I1210 07:44:13.680660  412953 command_runner.go:130] > # cpushares = "5"
	I1210 07:44:13.680672  412953 command_runner.go:130] > # cpuquota = "1000"
	I1210 07:44:13.680676  412953 command_runner.go:130] > # cpuperiod = "100000"
	I1210 07:44:13.680680  412953 command_runner.go:130] > # cpulimit = "35"
	I1210 07:44:13.680686  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.680691  412953 command_runner.go:130] > # The workload name is workload-type.
	I1210 07:44:13.680706  412953 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 07:44:13.680717  412953 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 07:44:13.680730  412953 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 07:44:13.680742  412953 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 07:44:13.680748  412953 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
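Reassembled as actual (uncommented) configuration, the example from the comments above reads as the following sketch; all names and values are the illustrative ones given there:

```toml
# Hedged sketch: the workload example from the comments above,
# written out as real configuration.
[crio.runtime.workloads.workload-type]
activation_annotation = "io.crio/workload"
annotation_prefix = "io.crio.workload-type"

[crio.runtime.workloads.workload-type.resources]
cpuset = "0-1"
cpushares = "5"
cpuquota = "1000"
cpuperiod = "100000"
cpulimit = "35"
```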
	I1210 07:44:13.680756  412953 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 07:44:13.680763  412953 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 07:44:13.680767  412953 command_runner.go:130] > # Default value is set to true
	I1210 07:44:13.681004  412953 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 07:44:13.681022  412953 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 07:44:13.681028  412953 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 07:44:13.681032  412953 command_runner.go:130] > # Default value is set to 'false'
	I1210 07:44:13.681046  412953 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 07:44:13.681057  412953 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1210 07:44:13.681066  412953 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 07:44:13.681072  412953 command_runner.go:130] > # timezone = ""
	I1210 07:44:13.681078  412953 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 07:44:13.681082  412953 command_runner.go:130] > #
	I1210 07:44:13.681089  412953 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 07:44:13.681101  412953 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 07:44:13.681105  412953 command_runner.go:130] > [crio.image]
	I1210 07:44:13.681112  412953 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 07:44:13.681133  412953 command_runner.go:130] > # default_transport = "docker://"
	I1210 07:44:13.681145  412953 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 07:44:13.681152  412953 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681158  412953 command_runner.go:130] > # global_auth_file = ""
	I1210 07:44:13.681163  412953 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 07:44:13.681168  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681175  412953 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.681182  412953 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 07:44:13.681198  412953 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681207  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681403  412953 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 07:44:13.681421  412953 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 07:44:13.681429  412953 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 07:44:13.681436  412953 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 07:44:13.681442  412953 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 07:44:13.681460  412953 command_runner.go:130] > # pause_command = "/pause"
	I1210 07:44:13.681466  412953 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 07:44:13.681473  412953 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 07:44:13.681481  412953 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 07:44:13.681487  412953 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 07:44:13.681495  412953 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 07:44:13.681508  412953 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 07:44:13.681512  412953 command_runner.go:130] > # pinned_images = [
	I1210 07:44:13.681700  412953 command_runner.go:130] > # ]
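A sketch of the three pattern styles described above; only the pause image name appears elsewhere in this config, the other two entries are hypothetical:

```toml
# Hedged sketch: exact, glob, and keyword pinning patterns.
pinned_images = [
	"registry.k8s.io/pause:3.10.1",  # exact match (pause_image above)
	"quay.io/example/agent*",        # glob: wildcard at the end (hypothetical)
	"*critical*",                    # keyword: wildcards on both ends (hypothetical)
]
```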
	I1210 07:44:13.681712  412953 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 07:44:13.681720  412953 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 07:44:13.681726  412953 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 07:44:13.681733  412953 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 07:44:13.681759  412953 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 07:44:13.681771  412953 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 07:44:13.681777  412953 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 07:44:13.681786  412953 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 07:44:13.681793  412953 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 07:44:13.681800  412953 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1210 07:44:13.681806  412953 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 07:44:13.682016  412953 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 07:44:13.682034  412953 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 07:44:13.682042  412953 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 07:44:13.682046  412953 command_runner.go:130] > # changing them here.
	I1210 07:44:13.682052  412953 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 07:44:13.682069  412953 command_runner.go:130] > # insecure_registries = [
	I1210 07:44:13.682078  412953 command_runner.go:130] > # ]
	I1210 07:44:13.682085  412953 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 07:44:13.682090  412953 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1210 07:44:13.682257  412953 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 07:44:13.682273  412953 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 07:44:13.682285  412953 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 07:44:13.682292  412953 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 07:44:13.682299  412953 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 07:44:13.682504  412953 command_runner.go:130] > # auto_reload_registries = false
	I1210 07:44:13.682520  412953 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 07:44:13.682532  412953 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1210 07:44:13.682540  412953 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 07:44:13.682567  412953 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1210 07:44:13.682578  412953 command_runner.go:130] > # The mode of short name resolution.
	I1210 07:44:13.682585  412953 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 07:44:13.682595  412953 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 07:44:13.682600  412953 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 07:44:13.682615  412953 command_runner.go:130] > # short_name_mode = "enforcing"
	I1210 07:44:13.682622  412953 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1210 07:44:13.682630  412953 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 07:44:13.683045  412953 command_runner.go:130] > # oci_artifact_mount_support = true
	I1210 07:44:13.683063  412953 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 07:44:13.683080  412953 command_runner.go:130] > # CNI plugins.
	I1210 07:44:13.683084  412953 command_runner.go:130] > [crio.network]
	I1210 07:44:13.683091  412953 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 07:44:13.683100  412953 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1210 07:44:13.683104  412953 command_runner.go:130] > # cni_default_network = ""
	I1210 07:44:13.683110  412953 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 07:44:13.683116  412953 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 07:44:13.683122  412953 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 07:44:13.683126  412953 command_runner.go:130] > # plugin_dirs = [
	I1210 07:44:13.683439  412953 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 07:44:13.683727  412953 command_runner.go:130] > # ]
	I1210 07:44:13.683742  412953 command_runner.go:130] > # List of included pod metrics.
	I1210 07:44:13.684014  412953 command_runner.go:130] > # included_pod_metrics = [
	I1210 07:44:13.684312  412953 command_runner.go:130] > # ]
	I1210 07:44:13.684328  412953 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1210 07:44:13.684333  412953 command_runner.go:130] > [crio.metrics]
	I1210 07:44:13.684339  412953 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 07:44:13.684905  412953 command_runner.go:130] > # enable_metrics = false
	I1210 07:44:13.684921  412953 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 07:44:13.684926  412953 command_runner.go:130] > # By default, all metrics are enabled.
	I1210 07:44:13.684933  412953 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 07:44:13.684946  412953 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 07:44:13.684969  412953 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 07:44:13.685240  412953 command_runner.go:130] > # metrics_collectors = [
	I1210 07:44:13.685580  412953 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 07:44:13.685893  412953 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 07:44:13.686203  412953 command_runner.go:130] > # 	"containers_oom_total",
	I1210 07:44:13.686514  412953 command_runner.go:130] > # 	"processes_defunct",
	I1210 07:44:13.686821  412953 command_runner.go:130] > # 	"operations_total",
	I1210 07:44:13.687152  412953 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 07:44:13.687476  412953 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 07:44:13.687786  412953 command_runner.go:130] > # 	"operations_errors_total",
	I1210 07:44:13.688090  412953 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 07:44:13.688395  412953 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 07:44:13.688727  412953 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 07:44:13.689070  412953 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 07:44:13.689083  412953 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 07:44:13.689089  412953 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 07:44:13.689093  412953 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 07:44:13.689098  412953 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 07:44:13.689634  412953 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 07:44:13.689646  412953 command_runner.go:130] > # ]
	I1210 07:44:13.689654  412953 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 07:44:13.689658  412953 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 07:44:13.689671  412953 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 07:44:13.689696  412953 command_runner.go:130] > # metrics_port = 9090
	I1210 07:44:13.689701  412953 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 07:44:13.689706  412953 command_runner.go:130] > # metrics_socket = ""
	I1210 07:44:13.689716  412953 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 07:44:13.689722  412953 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 07:44:13.689731  412953 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 07:44:13.689737  412953 command_runner.go:130] > # certificate on any modification event.
	I1210 07:44:13.689741  412953 command_runner.go:130] > # metrics_cert = ""
	I1210 07:44:13.689746  412953 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 07:44:13.689751  412953 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 07:44:13.689764  412953 command_runner.go:130] > # metrics_key = ""
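As a sketch, enabling the metrics endpoint with an explicit collector list, using only keys and collector names that appear above (the host/port are the documented defaults):

```toml
# Hedged sketch: enable Prometheus metrics on the default host/port
# with a narrowed collector set (names taken from the list above).
[crio.metrics]
enable_metrics = true
metrics_host = "127.0.0.1"
metrics_port = 9090
metrics_collectors = [
	"operations_total",
	"image_pulls_failure_total",
	"containers_oom_total",
]
```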
	I1210 07:44:13.689770  412953 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 07:44:13.689774  412953 command_runner.go:130] > [crio.tracing]
	I1210 07:44:13.689781  412953 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 07:44:13.689785  412953 command_runner.go:130] > # enable_tracing = false
	I1210 07:44:13.689792  412953 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1210 07:44:13.689799  412953 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 07:44:13.689806  412953 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 07:44:13.689833  412953 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
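A sketch of turning tracing on with full sampling, per the "set to 1000000 to always sample" note above; the endpoint is the documented default:

```toml
# Hedged sketch: export OpenTelemetry traces to a local collector,
# sampling every span.
[crio.tracing]
enable_tracing = true
tracing_endpoint = "127.0.0.1:4317"
tracing_sampling_rate_per_million = 1000000
```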
	I1210 07:44:13.689842  412953 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 07:44:13.689845  412953 command_runner.go:130] > [crio.nri]
	I1210 07:44:13.689850  412953 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 07:44:13.689861  412953 command_runner.go:130] > # enable_nri = true
	I1210 07:44:13.689865  412953 command_runner.go:130] > # NRI socket to listen on.
	I1210 07:44:13.689873  412953 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 07:44:13.689877  412953 command_runner.go:130] > # NRI plugin directory to use.
	I1210 07:44:13.689882  412953 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 07:44:13.689890  412953 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 07:44:13.689894  412953 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 07:44:13.689900  412953 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 07:44:13.689965  412953 command_runner.go:130] > # nri_disable_connections = false
	I1210 07:44:13.689975  412953 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 07:44:13.689991  412953 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 07:44:13.689997  412953 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 07:44:13.690006  412953 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 07:44:13.690011  412953 command_runner.go:130] > # NRI default validator configuration.
	I1210 07:44:13.690018  412953 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1210 07:44:13.690027  412953 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 07:44:13.690036  412953 command_runner.go:130] > # can be restricted/rejected:
	I1210 07:44:13.690044  412953 command_runner.go:130] > # - OCI hook injection
	I1210 07:44:13.690060  412953 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 07:44:13.690068  412953 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 07:44:13.690072  412953 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 07:44:13.690076  412953 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 07:44:13.690083  412953 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 07:44:13.690093  412953 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 07:44:13.690099  412953 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 07:44:13.690107  412953 command_runner.go:130] > #
	I1210 07:44:13.690111  412953 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 07:44:13.690115  412953 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 07:44:13.690122  412953 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 07:44:13.690134  412953 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 07:44:13.690148  412953 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 07:44:13.690154  412953 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 07:44:13.690159  412953 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 07:44:13.690165  412953 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 07:44:13.690168  412953 command_runner.go:130] > # ]
	I1210 07:44:13.690174  412953 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
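All of the validator knobs above are commented out, i.e. the default validator is off in this run. A minimal sketch of enabling it via CRI-O's drop-in mechanism (the same /etc/crio/crio.conf.d/ layout the startup messages below confirm); the file name 20-nri-validator.conf is hypothetical and the enclosing [crio.nri] table is assumed from the commented defaults, while the key names are exactly the ones listed above:

	# write a drop-in that turns the builtin NRI validator on (sketch, not from this run)
	sudo tee /etc/crio/crio.conf.d/20-nri-validator.conf <<'EOF'
	[crio.nri]
	enable_nri = true
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	# reject any NRI plugin that tries to inject OCI hooks
	nri_validator_reject_oci_hook_adjustment = true
	EOF
	sudo systemctl restart crio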
	I1210 07:44:13.690182  412953 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 07:44:13.690192  412953 command_runner.go:130] > [crio.stats]
	I1210 07:44:13.690198  412953 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 07:44:13.690212  412953 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 07:44:13.690219  412953 command_runner.go:130] > # stats_collection_period = 0
	I1210 07:44:13.690225  412953 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 07:44:13.690232  412953 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 07:44:13.690243  412953 command_runner.go:130] > # collection_period = 0
	I1210 07:44:13.692149  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648702659Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 07:44:13.692177  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648881459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 07:44:13.692188  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648978856Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 07:44:13.692196  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649067965Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 07:44:13.692212  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649235303Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.692221  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649618857Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 07:44:13.692237  412953 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 07:44:13.692317  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:13.692335  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:13.692359  412953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:44:13.692385  412953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:44:13.692523  412953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:44:13.692606  412953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:44:13.699318  412953 command_runner.go:130] > kubeadm
	I1210 07:44:13.699338  412953 command_runner.go:130] > kubectl
	I1210 07:44:13.699343  412953 command_runner.go:130] > kubelet
	I1210 07:44:13.700197  412953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:44:13.700295  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:44:13.707538  412953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:44:13.720130  412953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:44:13.732445  412953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
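The rendered kubeadm config above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (2221 bytes, matching the scp line just above); at 07:44:14.648 below, minikube diffs it against the live kubeadm.yaml to decide whether the control plane needs reconfiguring. A manual equivalent of that idempotence check:

	# empty diff (exit 0) means "The running cluster does not require reconfiguration"
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "configs match: no reconfiguration needed"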
	I1210 07:44:13.744899  412953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:44:13.748570  412953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 07:44:13.748818  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.875367  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:13.911048  412953 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:44:13.911077  412953 certs.go:195] generating shared ca certs ...
	I1210 07:44:13.911094  412953 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:13.911231  412953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:44:13.911285  412953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:44:13.911297  412953 certs.go:257] generating profile certs ...
	I1210 07:44:13.911404  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:44:13.911477  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:44:13.911525  412953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:44:13.911539  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 07:44:13.911552  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 07:44:13.911567  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 07:44:13.911578  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 07:44:13.911593  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 07:44:13.911610  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 07:44:13.911622  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 07:44:13.911637  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 07:44:13.911683  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:44:13.911717  412953 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:44:13.911729  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:44:13.911762  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:44:13.911791  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:44:13.911819  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:44:13.911865  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:13.911900  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /usr/share/ca-certificates/3785282.pem
	I1210 07:44:13.911918  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:13.911928  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem -> /usr/share/ca-certificates/378528.pem
	I1210 07:44:13.912577  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:44:13.931574  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:44:13.949287  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:44:13.966704  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:44:13.984537  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:44:14.005273  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:44:14.024726  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:44:14.043246  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:44:14.061500  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:44:14.078597  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:44:14.096003  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:44:14.113316  412953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:44:14.125784  412953 ssh_runner.go:195] Run: openssl version
	I1210 07:44:14.132223  412953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 07:44:14.132300  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.139621  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:44:14.146891  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150749  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150804  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150854  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.191223  412953 command_runner.go:130] > 3ec20f2e
	I1210 07:44:14.191672  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
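The hash-and-symlink sequence above is how OpenSSL locates trusted CAs: the subject hash printed by openssl x509 -hash names a .0 symlink in /etc/ssl/certs. Reproducing the same lookup by hand (same commands as the log; the hash value will differ per certificate):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem)
	sudo ln -fs /usr/share/ca-certificates/3785282.pem "/etc/ssl/certs/${hash}.0"
	sudo test -L "/etc/ssl/certs/${hash}.0" && echo "CA installed as ${hash}.0"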
	I1210 07:44:14.199095  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.206573  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:44:14.214321  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218345  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218446  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218516  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.259240  412953 command_runner.go:130] > b5213941
	I1210 07:44:14.259776  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:44:14.267399  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.274814  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:44:14.282253  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286034  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286101  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286170  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.327536  412953 command_runner.go:130] > 51391683
	I1210 07:44:14.327674  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:44:14.335034  412953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338581  412953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338609  412953 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 07:44:14.338616  412953 command_runner.go:130] > Device: 259,1	Inode: 1322411     Links: 1
	I1210 07:44:14.338623  412953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:14.338628  412953 command_runner.go:130] > Access: 2025-12-10 07:40:07.276287392 +0000
	I1210 07:44:14.338634  412953 command_runner.go:130] > Modify: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338639  412953 command_runner.go:130] > Change: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338644  412953 command_runner.go:130] >  Birth: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338702  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:44:14.379186  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.379683  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:44:14.420781  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.421255  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:44:14.461926  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.462055  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:44:14.509912  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.510522  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:44:14.558004  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.558477  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:44:14.599044  412953 command_runner.go:130] > Certificate will not expire
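Each of the -checkend 86400 calls above asks OpenSSL whether the certificate survives the next 86400 seconds (24 hours); exit status 0 means it will, hence the repeated "Certificate will not expire" lines. The same sweep over all six certs checked in this run, as a compact loop:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${c}.crt" \
	    && echo "${c}: will not expire within 24h"
	done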
	I1210 07:44:14.599455  412953 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:14.599550  412953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:44:14.599615  412953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:44:14.630244  412953 cri.go:89] found id: ""
	I1210 07:44:14.630352  412953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:44:14.638132  412953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 07:44:14.638152  412953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 07:44:14.638158  412953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 07:44:14.638171  412953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:44:14.638176  412953 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:44:14.638225  412953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:44:14.645608  412953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:44:14.646002  412953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-314220" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646112  412953 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-376671/kubeconfig needs updating (will repair): [kubeconfig missing "functional-314220" cluster setting kubeconfig missing "functional-314220" context setting]
	I1210 07:44:14.646387  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
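The repair above rewrites the kubeconfig to add the missing "functional-314220" cluster and context entries. A hedged way to confirm the repair from the shell (standard kubectl, same kubeconfig path as the log):

	kubectl config get-contexts \
	  --kubeconfig /home/jenkins/minikube-integration/22089-376671/kubeconfig \
	  | grep functional-314220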
	I1210 07:44:14.646808  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646962  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.647769  412953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:44:14.647791  412953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 07:44:14.647797  412953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:44:14.647801  412953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:44:14.647806  412953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:44:14.647858  412953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 07:44:14.648134  412953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:44:14.656007  412953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 07:44:14.656041  412953 kubeadm.go:602] duration metric: took 17.859608ms to restartPrimaryControlPlane
	I1210 07:44:14.656051  412953 kubeadm.go:403] duration metric: took 56.601079ms to StartCluster
	I1210 07:44:14.656066  412953 settings.go:142] acquiring lock: {Name:mk83336eaf1e9f7632e16e15e8d9e14eb0e0d0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.656132  412953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.656799  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.657004  412953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:44:14.657416  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:14.657431  412953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:44:14.658092  412953 addons.go:70] Setting storage-provisioner=true in profile "functional-314220"
	I1210 07:44:14.658110  412953 addons.go:239] Setting addon storage-provisioner=true in "functional-314220"
	I1210 07:44:14.658137  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.658702  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.665050  412953 addons.go:70] Setting default-storageclass=true in profile "functional-314220"
	I1210 07:44:14.665125  412953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-314220"
	I1210 07:44:14.665550  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.671074  412953 out.go:179] * Verifying Kubernetes components...
	I1210 07:44:14.676445  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:14.698425  412953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:44:14.701187  412953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.701211  412953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:44:14.701278  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.705662  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.705841  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.706176  412953 addons.go:239] Setting addon default-storageclass=true in "functional-314220"
	I1210 07:44:14.706207  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.706646  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.744732  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:14.744810  412953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:14.744830  412953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:44:14.744900  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.778977  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
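Both addon applies run over SSH into the node container: the docker container inspect calls above resolve the container's 22/tcp to host port 33158, and sshutil then dials 127.0.0.1:33158 as user docker. A manual equivalent of the session minikube opens, using the same key path and port the log reports:

	ssh -i /home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa \
	    -p 33158 docker@127.0.0.1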
	I1210 07:44:14.876345  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:14.912899  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.922881  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.662190  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662227  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662277  412953 retry.go:31] will retry after 311.954263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662347  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662381  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662389  412953 retry.go:31] will retry after 234.07921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
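Both applies fail the same way: kubectl's validation step fetches the apiserver's OpenAPI schema, and nothing is listening on localhost:8441 yet, so the dial is refused. minikube's retry.go then re-runs each apply with growing, jittered delays (the "will retry after" lines). A bare-bones equivalent of that loop with a fixed delay (sketch only; minikube's actual backoff varies per attempt):

	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	    -f /etc/kubernetes/addons/storage-provisioner.yaml; do
	  sleep 1   # retry.go uses increasing jittered waits instead of a constant
	done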
	I1210 07:44:15.662447  412953 node_ready.go:35] waiting up to 6m0s for node "functional-314220" to be "Ready" ...
	I1210 07:44:15.662665  412953 type.go:168] "Request Body" body=""
	I1210 07:44:15.662765  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:15.663157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
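The status=""/milliseconds=0 "Response" entries here are not real HTTP replies: the TCP dial to 192.168.49.2:8441 itself fails (the warning at 07:44:17.663 below says connection refused), so the readiness poll keeps ticking roughly every 500 ms. Once the apiserver is up, the same condition can be probed manually with kubectl (standard jsonpath filter, same kubeconfig path as the log):

	kubectl --kubeconfig /home/jenkins/minikube-integration/22089-376671/kubeconfig \
	  get node functional-314220 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'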
	I1210 07:44:15.897488  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.957295  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.957408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.957431  412953 retry.go:31] will retry after 307.155853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.974530  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.030916  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.034621  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.034655  412953 retry.go:31] will retry after 246.948718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.162840  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.162973  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.163310  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.265735  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:16.282284  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.335651  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.339071  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.339103  412953 retry.go:31] will retry after 647.058742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361763  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.361804  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361822  412953 retry.go:31] will retry after 514.560746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.663231  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.663327  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.663641  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.877219  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.942769  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.942876  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.942918  412953 retry.go:31] will retry after 1.098847883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.987296  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.051987  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.055923  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.055964  412953 retry.go:31] will retry after 522.145884ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.163324  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.163405  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.163711  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:17.578391  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.635896  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.639746  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.639777  412953 retry.go:31] will retry after 768.766099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.662946  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.663049  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.663399  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:17.663474  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:18.042986  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:18.101043  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.104777  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.104811  412953 retry.go:31] will retry after 877.527078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.163066  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.163146  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.163494  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.409040  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:18.473157  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.473195  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.473221  412953 retry.go:31] will retry after 1.043117699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.663503  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.663629  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.663908  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.983598  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:19.054379  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.057795  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.057861  412953 retry.go:31] will retry after 2.806616267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.163140  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.163219  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.163514  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:19.517094  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:19.577109  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.577146  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.577191  412953 retry.go:31] will retry after 2.260515502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.663401  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.663487  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.663841  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:19.663910  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:20.163656  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.163728  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.164096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:20.662791  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.662881  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.663196  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.162812  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.162886  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.163185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.662729  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.662808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.663095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.838627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:21.865153  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:21.916464  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.916504  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.916523  412953 retry.go:31] will retry after 2.650338189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.931641  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.931686  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.931712  412953 retry.go:31] will retry after 2.932548046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:22.163174  412953 type.go:168] "Request Body" body=""
	I1210 07:44:22.163252  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:22.163593  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:22.163668  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
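
Interleaved with the apply retries, the round_trippers lines show minikube's node-readiness poll: roughly every 500ms it GETs /api/v1/nodes/functional-314220, and while the apiserver is down each request dies at TCP dial with connection refused (hence the empty status="" responses). A minimal sketch of such a poll using client-go, assuming a valid kubeconfig at the path the logs show; this is an illustration, not minikube's node_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-314220", metav1.GetOptions{})
            if err != nil {
                // While the apiserver is down this is the "connection refused"
                // reported by the node_ready.go warnings above.
                fmt.Println("will retry:", err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
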
	I1210 07:44:22.663491  412953 type.go:168] "Request Body" body=""
	I1210 07:44:22.663596  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:22.663955  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:23.162683  412953 type.go:168] "Request Body" body=""
	I1210 07:44:23.162754  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:23.163096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:23.662729  412953 type.go:168] "Request Body" body=""
	I1210 07:44:23.662804  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:23.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:24.162801  412953 type.go:168] "Request Body" body=""
	I1210 07:44:24.162914  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:24.163280  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:24.567824  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:24.621746  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:24.625216  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.625246  412953 retry.go:31] will retry after 7.727905191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.663687  412953 type.go:168] "Request Body" body=""
	I1210 07:44:24.663760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:24.664012  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:24.664064  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:24.864476  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:24.921495  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:24.921557  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.921581  412953 retry.go:31] will retry after 3.915945796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:25.162916  412953 type.go:168] "Request Body" body=""
	I1210 07:44:25.162995  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:25.163327  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:25.663045  412953 type.go:168] "Request Body" body=""
	I1210 07:44:25.663124  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:25.663415  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:26.163115  412953 type.go:168] "Request Body" body=""
	I1210 07:44:26.163196  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:26.163520  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:26.663439  412953 type.go:168] "Request Body" body=""
	I1210 07:44:26.663518  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:26.663824  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:27.163578  412953 type.go:168] "Request Body" body=""
	I1210 07:44:27.163681  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:27.164000  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:27.164069  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:27.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:44:27.662823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:27.663205  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:44:28.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:28.163219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:44:28.662869  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:28.663208  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.838651  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:28.899244  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:28.899280  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:28.899298  412953 retry.go:31] will retry after 8.041674514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
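
The ssh_runner lines show how each apply is issued: minikube invokes the version-pinned kubectl binary on the node with the cluster kubeconfig. A hypothetical local approximation with os/exec (minikube actually runs this through its SSH runner inside the node container):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the command line from the ssh_runner entries above.
        out, err := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
        ).CombinedOutput()
        if err != nil {
            // With the apiserver down, kubectl exits 1 and stderr carries the
            // "failed to download openapi ... connection refused" message.
            fmt.Printf("apply failed: %v\n%s", err, out)
        }
    }
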
	I1210 07:44:29.162702  412953 type.go:168] "Request Body" body=""
	I1210 07:44:29.162772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:29.163052  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:29.662768  412953 type.go:168] "Request Body" body=""
	I1210 07:44:29.662841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:29.663169  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:29.663226  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:30.162886  412953 type.go:168] "Request Body" body=""
	I1210 07:44:30.162968  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:30.163308  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:30.662996  412953 type.go:168] "Request Body" body=""
	I1210 07:44:30.663089  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:30.663373  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:31.163117  412953 type.go:168] "Request Body" body=""
	I1210 07:44:31.163198  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:31.163590  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:31.662807  412953 type.go:168] "Request Body" body=""
	I1210 07:44:31.662884  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:31.663200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:31.663248  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:32.162759  412953 type.go:168] "Request Body" body=""
	I1210 07:44:32.162841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:32.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:32.353668  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:32.409993  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:32.413403  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:32.413432  412953 retry.go:31] will retry after 6.914628842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:32.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:44:32.662856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:32.663233  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:33.162956  412953 type.go:168] "Request Body" body=""
	I1210 07:44:33.163049  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:33.163356  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:33.662689  412953 type.go:168] "Request Body" body=""
	I1210 07:44:33.662755  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:33.663031  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:34.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:44:34.162848  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:34.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:34.163258  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:34.663111  412953 type.go:168] "Request Body" body=""
	I1210 07:44:34.663191  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:34.663487  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:35.163272  412953 type.go:168] "Request Body" body=""
	I1210 07:44:35.163341  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:35.163701  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:35.663625  412953 type.go:168] "Request Body" body=""
	I1210 07:44:35.663709  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:35.664060  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:36.162762  412953 type.go:168] "Request Body" body=""
	I1210 07:44:36.162838  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:36.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:36.663557  412953 type.go:168] "Request Body" body=""
	I1210 07:44:36.663625  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:36.663891  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:36.663931  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:36.941565  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:36.998306  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:37.009698  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:37.009736  412953 retry.go:31] will retry after 8.728706472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:37.163096  412953 type.go:168] "Request Body" body=""
	I1210 07:44:37.163180  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:37.163526  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:37.663088  412953 type.go:168] "Request Body" body=""
	I1210 07:44:37.663168  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:37.663465  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:38.162753  412953 type.go:168] "Request Body" body=""
	I1210 07:44:38.162830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:38.163170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:38.662738  412953 type.go:168] "Request Body" body=""
	I1210 07:44:38.662816  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:38.663200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:39.162911  412953 type.go:168] "Request Body" body=""
	I1210 07:44:39.162982  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:39.163311  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:39.163365  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:39.328689  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:39.391413  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:39.391461  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:39.391479  412953 retry.go:31] will retry after 20.069023813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:39.663623  412953 type.go:168] "Request Body" body=""
	I1210 07:44:39.663692  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:39.664007  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:40.163717  412953 type.go:168] "Request Body" body=""
	I1210 07:44:40.163789  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:40.164098  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:40.662854  412953 type.go:168] "Request Body" body=""
	I1210 07:44:40.662926  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:40.663261  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:41.163240  412953 type.go:168] "Request Body" body=""
	I1210 07:44:41.163310  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:41.163588  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:41.163638  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:41.663374  412953 type.go:168] "Request Body" body=""
	I1210 07:44:41.663448  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:41.663787  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:42.163614  412953 type.go:168] "Request Body" body=""
	I1210 07:44:42.163700  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:42.164110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:42.662817  412953 type.go:168] "Request Body" body=""
	I1210 07:44:42.662893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:42.663220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:43.162788  412953 type.go:168] "Request Body" body=""
	I1210 07:44:43.162930  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:43.163267  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:43.662791  412953 type.go:168] "Request Body" body=""
	I1210 07:44:43.662874  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:43.663242  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:43.663300  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:44.162963  412953 type.go:168] "Request Body" body=""
	I1210 07:44:44.163057  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:44.163345  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:44.663123  412953 type.go:168] "Request Body" body=""
	I1210 07:44:44.663199  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:44.663539  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:45.163160  412953 type.go:168] "Request Body" body=""
	I1210 07:44:45.163248  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:45.163618  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:45.663558  412953 type.go:168] "Request Body" body=""
	I1210 07:44:45.663640  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:45.663930  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:45.663983  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:45.739308  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:45.803966  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:45.804014  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:45.804032  412953 retry.go:31] will retry after 15.619557427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:46.163368  412953 type.go:168] "Request Body" body=""
	I1210 07:44:46.163449  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:46.163809  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:46.663723  412953 type.go:168] "Request Body" body=""
	I1210 07:44:46.663804  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:46.664157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:47.162830  412953 type.go:168] "Request Body" body=""
	I1210 07:44:47.162904  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:47.163246  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:47.662803  412953 type.go:168] "Request Body" body=""
	I1210 07:44:47.662878  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:47.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:48.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:44:48.162868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:48.163231  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:48.163295  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:48.662914  412953 type.go:168] "Request Body" body=""
	I1210 07:44:48.662989  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:48.663322  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:49.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:44:49.162856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:49.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:49.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:44:49.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:49.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:50.162736  412953 type.go:168] "Request Body" body=""
	I1210 07:44:50.162810  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:50.163100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:50.662994  412953 type.go:168] "Request Body" body=""
	I1210 07:44:50.663140  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:50.663480  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:50.663536  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:51.163315  412953 type.go:168] "Request Body" body=""
	I1210 07:44:51.163397  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:51.163738  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:51.663484  412953 type.go:168] "Request Body" body=""
	I1210 07:44:51.663554  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:51.663817  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:52.163592  412953 type.go:168] "Request Body" body=""
	I1210 07:44:52.163675  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:52.164024  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:52.663725  412953 type.go:168] "Request Body" body=""
	I1210 07:44:52.663805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:52.664170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:52.664269  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:53.162915  412953 type.go:168] "Request Body" body=""
	I1210 07:44:53.162989  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:53.163353  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:53.662767  412953 type.go:168] "Request Body" body=""
	I1210 07:44:53.662844  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:53.663173  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:54.162859  412953 type.go:168] "Request Body" body=""
	I1210 07:44:54.162935  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:54.163287  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:54.663094  412953 type.go:168] "Request Body" body=""
	I1210 07:44:54.663170  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:54.663454  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:55.163602  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET /api/v1/nodes/functional-314220 request was repeated every ~500ms through 07:44:59.163, each attempt receiving an empty response (connection refused); repeated entries elided ...]
	I1210 07:44:59.460756  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:59.515959  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:59.519313  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:59.519349  412953 retry.go:31] will retry after 28.214559207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
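The retry.go lines above show minikube's generic retry-with-backoff wrapper kicking in after the failed kubectl apply. Below is a minimal, illustrative sketch of that pattern in Go; it is not minikube's actual implementation, and the names retryWithBackoff and applyAddon are hypothetical stand-ins.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a jittered, growing delay between tries (compare the log's
// "will retry after 28.214559207s" style intervals).
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Exponential backoff with jitter: base * 2^i, scaled by [1.0, 2.0).
		delay := time.Duration(float64(base) * float64(uint(1)<<uint(i)) * (1 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// applyAddon stands in for the "kubectl apply --force -f ..." call; here
	// it always fails, mirroring the connection-refused errors in the log.
	applyAddon := func() error {
		return errors.New("dial tcp [::1]:8441: connect: connection refused")
	}
	if err := retryWithBackoff(3, 2*time.Second, applyAddon); err != nil {
		fmt.Println("giving up:", err)
	}
}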
	[... polling of GET /api/v1/nodes/functional-314220 continued every ~500ms from 07:44:59.663 to 07:45:01.163 with the same connection-refused warning from node_ready.go:55; repeated entries elided ...]
	I1210 07:45:01.424291  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:01.504370  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:01.504408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:01.504426  412953 retry.go:31] will retry after 11.28420248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
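The "failed to download openapi" errors occur because kubectl's client-side validation fetches the OpenAPI schema from the apiserver before applying; with the apiserver down, even a valid manifest cannot be validated, and minikube retries rather than passing --validate=false. A hedged sketch of shelling out to kubectl and detecting that case follows; the binary and manifest paths are taken from the log, but the helper applyManifest is hypothetical.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// applyManifest mirrors the ssh_runner command in the log: kubectl apply
// with --force against an addon manifest. It returns stderr so the caller
// can tell whether the failure was the apiserver being unreachable.
func applyManifest(kubectl, kubeconfig, manifest string) (string, error) {
	cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()
	return stderr.String(), err
}

func main() {
	stderr, err := applyManifest(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	if err != nil && strings.Contains(stderr, "failed to download openapi") {
		// The apiserver is down: validation cannot even start, so retrying
		// (as minikube does) is more appropriate than --validate=false.
		fmt.Println("apiserver unreachable, will retry:", err)
	}
}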
	[... polling continued every ~500ms from 07:45:01.662 to 07:45:12.663, logging the same node_ready.go:55 connection-refused warning roughly every two seconds; repeated entries elided ...]
	I1210 07:45:12.789627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:12.850283  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:12.850328  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:12.850347  412953 retry.go:31] will retry after 28.725170788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued every ~500ms from 07:45:13.162 to 07:45:27.663 with identical connection-refused results; repeated entries elided ...]
	I1210 07:45:27.734263  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:45:27.790479  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:27.794248  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:27.794290  412953 retry.go:31] will retry after 44.751938518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... polling continued every ~500ms from 07:45:28.162 to 07:45:41.163 with identical connection-refused results; repeated entries elided ...]
	I1210 07:45:41.576476  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:41.640104  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640160  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640252  412953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:45:41.663359  412953 type.go:168] "Request Body" body=""
	I1210 07:45:41.663436  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:41.663747  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:42.163603  412953 type.go:168] "Request Body" body=""
	I1210 07:45:42.163686  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:42.164024  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:42.663504  412953 type.go:168] "Request Body" body=""
	I1210 07:45:42.663576  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:42.663841  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:42.663882  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical readiness polling of https://192.168.49.2:8441/api/v1/nodes/functional-314220 every ~500ms, 07:45:43 through 07:46:12; every attempt failed, with node_ready.go:55 periodically warning: Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused (will retry) ...]
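
The entries condensed above are a fixed-interval readiness poll: a GET of the node object every ~500ms, with node_ready.go logging a will-retry warning every few failures. A minimal Go sketch of that polling shape, assuming a plain net/http client; the URL and interval come from the log, while the function and its names are illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// pollNodeReady issues GET /api/v1/nodes/<name> every interval until the
// deadline passes, logging a warning on failure, mirroring the loop above.
func pollNodeReady(client *http.Client, url string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. dial tcp 192.168.49.2:8441: connect: connection refused
			fmt.Println("W error getting node (will retry):", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // node object retrievable; caller then checks the Ready condition
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("node not reachable within %s", timeout)
}

func main() {
	err := pollNodeReady(http.DefaultClient,
		"https://192.168.49.2:8441/api/v1/nodes/functional-314220",
		500*time.Millisecond, 5*time.Second)
	fmt.Println("result:", err)
}
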
	I1210 07:46:12.546948  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:46:12.609717  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609753  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609836  412953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:46:12.614848  412953 out.go:179] * Enabled addons: 
	I1210 07:46:12.617540  412953 addons.go:530] duration metric: took 1m57.960111858s for enable addons: enabled=[]
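
With every apply failing, the addon phase finally gives up: after 1m57s it reports enabled=[], i.e. no addon was actually enabled. A minimal Go sketch of the retry-until-deadline callback shape implied by the "apply failed, will retry" lines (addons.go:477); the helper name, interval, and deadline are illustrative assumptions, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or the deadline
// passes, echoing the will-retry behaviour seen in the log.
func applyWithRetry(manifest string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		fmt.Printf("W apply failed, will retry: %v\n%s", err, out)
		if time.Now().After(deadline) {
			return fmt.Errorf("enabling addon from %s: %w", manifest, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Illustrative manifest path taken from the log; adjust for a real cluster.
	err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5*time.Second, 2*time.Minute)
	fmt.Println("result:", err)
}
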
	I1210 07:46:12.662919  412953 type.go:168] "Request Body" body=""
	I1210 07:46:12.663005  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:12.663288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:12.663340  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical readiness polling of https://192.168.49.2:8441/api/v1/nodes/functional-314220 continued every ~500ms, 07:46:13 through 07:46:40, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused" and node_ready.go:55 logging will-retry warnings ...]
	I1210 07:46:40.663175  412953 type.go:168] "Request Body" body=""
	I1210 07:46:40.663247  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:40.663570  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:41.163330  412953 type.go:168] "Request Body" body=""
	I1210 07:46:41.163398  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:41.163670  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:41.663442  412953 type.go:168] "Request Body" body=""
	I1210 07:46:41.663514  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:41.663828  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:41.663885  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:42.163612  412953 type.go:168] "Request Body" body=""
	I1210 07:46:42.163698  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:42.164038  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:42.662671  412953 type.go:168] "Request Body" body=""
	I1210 07:46:42.662751  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:42.663040  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:43.162772  412953 type.go:168] "Request Body" body=""
	I1210 07:46:43.162852  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:43.163223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:43.662929  412953 type.go:168] "Request Body" body=""
	I1210 07:46:43.663035  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:43.663349  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:44.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:46:44.162784  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:44.163074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:44.163124  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:44.663067  412953 type.go:168] "Request Body" body=""
	I1210 07:46:44.663140  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:44.663477  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:45.163353  412953 type.go:168] "Request Body" body=""
	I1210 07:46:45.163451  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:45.163846  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:45.662750  412953 type.go:168] "Request Body" body=""
	I1210 07:46:45.662825  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:45.663122  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:46.162781  412953 type.go:168] "Request Body" body=""
	I1210 07:46:46.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:46.163209  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:46.163281  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:46.662789  412953 type.go:168] "Request Body" body=""
	I1210 07:46:46.662869  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:46.663244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:47.162966  412953 type.go:168] "Request Body" body=""
	I1210 07:46:47.163051  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:47.163320  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:47.662754  412953 type.go:168] "Request Body" body=""
	I1210 07:46:47.662828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:47.663189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:48.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:46:48.162860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:48.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:48.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:46:48.662793  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:48.663100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:48.663152  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:49.162765  412953 type.go:168] "Request Body" body=""
	I1210 07:46:49.162862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:49.163255  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:49.662773  412953 type.go:168] "Request Body" body=""
	I1210 07:46:49.662850  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:49.663184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:50.162728  412953 type.go:168] "Request Body" body=""
	I1210 07:46:50.162800  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:50.163110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:50.662840  412953 type.go:168] "Request Body" body=""
	I1210 07:46:50.662940  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:50.663293  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:50.663354  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:51.163080  412953 type.go:168] "Request Body" body=""
	I1210 07:46:51.163164  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:51.163485  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:51.663128  412953 type.go:168] "Request Body" body=""
	I1210 07:46:51.663202  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:51.663559  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:52.163395  412953 type.go:168] "Request Body" body=""
	I1210 07:46:52.163475  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:52.163785  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:52.663590  412953 type.go:168] "Request Body" body=""
	I1210 07:46:52.663677  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:52.664017  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:52.664072  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:53.162668  412953 type.go:168] "Request Body" body=""
	I1210 07:46:53.162748  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:53.163064  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:53.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:46:53.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:53.663165  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:54.162852  412953 type.go:168] "Request Body" body=""
	I1210 07:46:54.162929  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:54.163260  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:54.663174  412953 type.go:168] "Request Body" body=""
	I1210 07:46:54.663244  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:54.663519  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:55.163370  412953 type.go:168] "Request Body" body=""
	I1210 07:46:55.163453  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:55.163790  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:55.163840  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:55.663591  412953 type.go:168] "Request Body" body=""
	I1210 07:46:55.663675  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:55.664032  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:56.162718  412953 type.go:168] "Request Body" body=""
	I1210 07:46:56.162787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:56.163127  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:56.662776  412953 type.go:168] "Request Body" body=""
	I1210 07:46:56.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:56.663223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:57.162920  412953 type.go:168] "Request Body" body=""
	I1210 07:46:57.162997  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:57.163350  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:57.663044  412953 type.go:168] "Request Body" body=""
	I1210 07:46:57.663119  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:57.663437  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:57.663491  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:58.162764  412953 type.go:168] "Request Body" body=""
	I1210 07:46:58.162842  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:58.163207  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:58.662916  412953 type.go:168] "Request Body" body=""
	I1210 07:46:58.662998  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:58.663346  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:59.162707  412953 type.go:168] "Request Body" body=""
	I1210 07:46:59.162786  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:59.163086  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:59.662772  412953 type.go:168] "Request Body" body=""
	I1210 07:46:59.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:59.663202  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:00.162842  412953 type.go:168] "Request Body" body=""
	I1210 07:47:00.162937  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:00.163453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:00.163526  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:00.663007  412953 type.go:168] "Request Body" body=""
	I1210 07:47:00.663098  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:00.663417  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:01.162776  412953 type.go:168] "Request Body" body=""
	I1210 07:47:01.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:01.163244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:01.662972  412953 type.go:168] "Request Body" body=""
	I1210 07:47:01.663073  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:01.663435  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:02.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:47:02.162838  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:02.163124  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:02.662798  412953 type.go:168] "Request Body" body=""
	I1210 07:47:02.662880  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:02.663234  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:02.663292  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:03.162956  412953 type.go:168] "Request Body" body=""
	I1210 07:47:03.163056  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:03.163409  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:03.663107  412953 type.go:168] "Request Body" body=""
	I1210 07:47:03.663180  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:03.663591  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:04.163359  412953 type.go:168] "Request Body" body=""
	I1210 07:47:04.163439  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:04.163771  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:04.662805  412953 type.go:168] "Request Body" body=""
	I1210 07:47:04.662883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:04.663237  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:05.162670  412953 type.go:168] "Request Body" body=""
	I1210 07:47:05.162740  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:05.163054  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:05.163104  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:05.662994  412953 type.go:168] "Request Body" body=""
	I1210 07:47:05.663093  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:05.663427  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:06.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:06.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:06.163239  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:06.662861  412953 type.go:168] "Request Body" body=""
	I1210 07:47:06.662927  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:06.663190  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:07.162855  412953 type.go:168] "Request Body" body=""
	I1210 07:47:07.162985  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:07.163313  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:07.163372  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:07.662758  412953 type.go:168] "Request Body" body=""
	I1210 07:47:07.662832  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:07.663175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:08.162737  412953 type.go:168] "Request Body" body=""
	I1210 07:47:08.162808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:08.163134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:08.662744  412953 type.go:168] "Request Body" body=""
	I1210 07:47:08.662821  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:08.663156  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:09.162866  412953 type.go:168] "Request Body" body=""
	I1210 07:47:09.162942  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:09.163277  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:09.662758  412953 type.go:168] "Request Body" body=""
	I1210 07:47:09.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:09.663155  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:09.663202  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:10.162922  412953 type.go:168] "Request Body" body=""
	I1210 07:47:10.163005  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:10.163368  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:10.662972  412953 type.go:168] "Request Body" body=""
	I1210 07:47:10.663067  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:10.663391  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:11.163075  412953 type.go:168] "Request Body" body=""
	I1210 07:47:11.163169  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:11.163529  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:11.663298  412953 type.go:168] "Request Body" body=""
	I1210 07:47:11.663381  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:11.663711  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:11.663770  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:12.163545  412953 type.go:168] "Request Body" body=""
	I1210 07:47:12.163626  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:12.163962  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:12.663589  412953 type.go:168] "Request Body" body=""
	I1210 07:47:12.663663  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:12.663928  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:13.163691  412953 type.go:168] "Request Body" body=""
	I1210 07:47:13.163763  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:13.164095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:13.662728  412953 type.go:168] "Request Body" body=""
	I1210 07:47:13.662802  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:13.663152  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:14.162755  412953 type.go:168] "Request Body" body=""
	I1210 07:47:14.162827  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:14.163128  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:14.163174  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:14.663040  412953 type.go:168] "Request Body" body=""
	I1210 07:47:14.663122  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:14.663453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:15.163172  412953 type.go:168] "Request Body" body=""
	I1210 07:47:15.163245  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:15.163583  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:15.663556  412953 type.go:168] "Request Body" body=""
	I1210 07:47:15.663626  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:15.663897  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:16.162681  412953 type.go:168] "Request Body" body=""
	I1210 07:47:16.162760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:16.163086  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:16.662812  412953 type.go:168] "Request Body" body=""
	I1210 07:47:16.662888  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:16.663240  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:16.663299  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:17.162710  412953 type.go:168] "Request Body" body=""
	I1210 07:47:17.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:17.163065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:17.662764  412953 type.go:168] "Request Body" body=""
	I1210 07:47:17.662843  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:17.663184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:18.162904  412953 type.go:168] "Request Body" body=""
	I1210 07:47:18.162985  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:18.163341  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:18.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:47:18.662779  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:18.663072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:19.162752  412953 type.go:168] "Request Body" body=""
	I1210 07:47:19.162825  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:19.163160  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:19.163217  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:19.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:47:19.662831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:19.663169  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:20.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:47:20.162785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:20.163102  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:20.662785  412953 type.go:168] "Request Body" body=""
	I1210 07:47:20.662863  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:20.663210  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:21.162946  412953 type.go:168] "Request Body" body=""
	I1210 07:47:21.163041  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:21.163356  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:21.163416  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:21.662705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:21.662777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:21.663055  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:22.162766  412953 type.go:168] "Request Body" body=""
	I1210 07:47:22.162839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:22.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:22.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:22.662854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:22.663212  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:23.163429  412953 type.go:168] "Request Body" body=""
	I1210 07:47:23.163503  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:23.163746  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:23.163785  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:23.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:47:23.663593  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:23.663930  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:24.163583  412953 type.go:168] "Request Body" body=""
	I1210 07:47:24.163661  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:24.164012  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:24.662895  412953 type.go:168] "Request Body" body=""
	I1210 07:47:24.662963  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:24.663238  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:25.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:25.162856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:25.163189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:25.662776  412953 type.go:168] "Request Body" body=""
	I1210 07:47:25.662854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:25.663185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:25.663235  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:26.162705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:26.162780  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:26.163095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:26.662818  412953 type.go:168] "Request Body" body=""
	I1210 07:47:26.662889  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:26.663264  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:27.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:47:27.162913  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:27.163219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:27.662705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:27.662774  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:27.663040  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:28.162741  412953 type.go:168] "Request Body" body=""
	I1210 07:47:28.162822  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:28.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:28.163235  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:28.662765  412953 type.go:168] "Request Body" body=""
	I1210 07:47:28.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:28.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:29.163595  412953 type.go:168] "Request Body" body=""
	I1210 07:47:29.163681  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:29.163938  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:29.663716  412953 type.go:168] "Request Body" body=""
	I1210 07:47:29.663791  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:29.664120  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:30.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:47:30.162881  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:30.163228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:30.163277  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:30.662975  412953 type.go:168] "Request Body" body=""
	I1210 07:47:30.663060  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:30.663309  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:31.162750  412953 type.go:168] "Request Body" body=""
	I1210 07:47:31.162823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:31.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:31.662903  412953 type.go:168] "Request Body" body=""
	I1210 07:47:31.662983  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:31.663348  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:32.163059  412953 type.go:168] "Request Body" body=""
	I1210 07:47:32.163151  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:32.163486  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:32.163538  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:32.663271  412953 type.go:168] "Request Body" body=""
	I1210 07:47:32.663354  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:32.663709  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:33.163521  412953 type.go:168] "Request Body" body=""
	I1210 07:47:33.163606  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:33.163923  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:33.663552  412953 type.go:168] "Request Body" body=""
	I1210 07:47:33.663621  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:33.663890  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:34.162672  412953 type.go:168] "Request Body" body=""
	I1210 07:47:34.162756  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:34.163144  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:34.662973  412953 type.go:168] "Request Body" body=""
	I1210 07:47:34.663067  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:34.663388  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:34.663446  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:35.163091  412953 type.go:168] "Request Body" body=""
	I1210 07:47:35.163163  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:35.163426  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:35.663452  412953 type.go:168] "Request Body" body=""
	I1210 07:47:35.663527  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:35.663843  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:36.163648  412953 type.go:168] "Request Body" body=""
	I1210 07:47:36.163726  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:36.164083  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:36.662824  412953 type.go:168] "Request Body" body=""
	I1210 07:47:36.662911  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:36.663202  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:37.162888  412953 type.go:168] "Request Body" body=""
	I1210 07:47:37.162961  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:37.163292  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:37.163351  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:37.663060  412953 type.go:168] "Request Body" body=""
	I1210 07:47:37.663139  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:37.663481  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:38.163223  412953 type.go:168] "Request Body" body=""
	I1210 07:47:38.163297  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:38.163629  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:38.663452  412953 type.go:168] "Request Body" body=""
	I1210 07:47:38.663533  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:38.663884  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:39.163604  412953 type.go:168] "Request Body" body=""
	I1210 07:47:39.163682  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:39.164033  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:39.164085  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:39.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:47:39.662798  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:39.663102  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:40.162773  412953 type.go:168] "Request Body" body=""
	I1210 07:47:40.162850  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:40.163206  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:40.663034  412953 type.go:168] "Request Body" body=""
	I1210 07:47:40.663108  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:40.663451  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:41.162787  412953 type.go:168] "Request Body" body=""
	I1210 07:47:41.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:41.163166  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:41.662816  412953 type.go:168] "Request Body" body=""
	I1210 07:47:41.662924  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:41.663355  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:41.663428  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:42.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:47:42.162931  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:42.163383  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:42.662719  412953 type.go:168] "Request Body" body=""
	I1210 07:47:42.662787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:42.663059  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:43.162773  412953 type.go:168] "Request Body" body=""
	I1210 07:47:43.162848  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:43.163226  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:43.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:47:43.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:43.663241  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:44.162732  412953 type.go:168] "Request Body" body=""
	I1210 07:47:44.162801  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:44.163080  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:44.163121  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:44.663044  412953 type.go:168] "Request Body" body=""
	I1210 07:47:44.663116  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:44.663433  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:45.163203  412953 type.go:168] "Request Body" body=""
	I1210 07:47:45.163296  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:45.163726  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:45.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:47:45.662794  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:45.663175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:46.162894  412953 type.go:168] "Request Body" body=""
	I1210 07:47:46.162972  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:46.163291  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:46.163350  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:46.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:46.662853  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:46.663154  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:47.162673  412953 type.go:168] "Request Body" body=""
	I1210 07:47:47.162741  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:47.163000  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:47.662781  412953 type.go:168] "Request Body" body=""
	I1210 07:47:47.662882  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:47.663282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:48.163055  412953 type.go:168] "Request Body" body=""
	I1210 07:47:48.163152  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:48.163450  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:48.163501  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:48.663148  412953 type.go:168] "Request Body" body=""
	I1210 07:47:48.663224  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:48.663521  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:49.163373  412953 type.go:168] "Request Body" body=""
	I1210 07:47:49.163457  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:49.163828  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:49.663672  412953 type.go:168] "Request Body" body=""
	I1210 07:47:49.663767  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:49.664111  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:50.163649  412953 type.go:168] "Request Body" body=""
	I1210 07:47:50.163722  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:50.163974  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:50.164024  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:50.662959  412953 type.go:168] "Request Body" body=""
	I1210 07:47:50.663053  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:50.663390  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:51.163109  412953 type.go:168] "Request Body" body=""
	I1210 07:47:51.163226  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:51.163573  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:51.663350  412953 type.go:168] "Request Body" body=""
	I1210 07:47:51.663424  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:51.663681  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:52.163449  412953 type.go:168] "Request Body" body=""
	I1210 07:47:52.163526  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:52.163814  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:52.663598  412953 type.go:168] "Request Body" body=""
	I1210 07:47:52.663677  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:52.664036  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:52.664093  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:53.162754  412953 type.go:168] "Request Body" body=""
	I1210 07:47:53.162833  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:53.163184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:53.662766  412953 type.go:168] "Request Body" body=""
	I1210 07:47:53.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:53.663209  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:54.162947  412953 type.go:168] "Request Body" body=""
	I1210 07:47:54.163051  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:54.163402  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:54.663120  412953 type.go:168] "Request Body" body=""
	I1210 07:47:54.663194  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:54.663486  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:55.163337  412953 type.go:168] "Request Body" body=""
	I1210 07:47:55.163413  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:55.163797  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:55.163853  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:55.663646  412953 type.go:168] "Request Body" body=""
	I1210 07:47:55.663720  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:55.664034  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:56.162701  412953 type.go:168] "Request Body" body=""
	I1210 07:47:56.162774  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:56.163098  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:56.662775  412953 type.go:168] "Request Body" body=""
	I1210 07:47:56.662859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:56.663186  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:57.162786  412953 type.go:168] "Request Body" body=""
	I1210 07:47:57.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:57.163228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:57.663510  412953 type.go:168] "Request Body" body=""
	I1210 07:47:57.663581  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:57.663872  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:57.663929  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:58.163725  412953 type.go:168] "Request Body" body=""
	I1210 07:47:58.163813  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:58.164168  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:58.662866  412953 type.go:168] "Request Body" body=""
	I1210 07:47:58.662937  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:58.663291  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:59.163430  412953 type.go:168] "Request Body" body=""
	I1210 07:47:59.163502  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:59.163755  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:59.663519  412953 type.go:168] "Request Body" body=""
	I1210 07:47:59.663597  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:59.663931  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:59.663987  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:00.163719  412953 type.go:168] "Request Body" body=""
	I1210 07:48:00.163813  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:00.164225  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:00.663315  412953 type.go:168] "Request Body" body=""
	I1210 07:48:00.663399  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:00.663675  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:01.163489  412953 type.go:168] "Request Body" body=""
	I1210 07:48:01.163567  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:01.163893  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:01.663702  412953 type.go:168] "Request Body" body=""
	I1210 07:48:01.663781  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:01.664135  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:01.664198  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:02.162817  412953 type.go:168] "Request Body" body=""
	I1210 07:48:02.162886  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:02.163184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:02.662884  412953 type.go:168] "Request Body" body=""
	I1210 07:48:02.662971  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:02.663338  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:03.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:48:03.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:03.163213  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:03.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:48:03.662836  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:03.663120  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:04.162778  412953 type.go:168] "Request Body" body=""
	I1210 07:48:04.162852  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:04.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:04.163249  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:04.663059  412953 type.go:168] "Request Body" body=""
	I1210 07:48:04.663141  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:04.663463  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:05.162715  412953 type.go:168] "Request Body" body=""
	I1210 07:48:05.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:05.163131  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:05.663057  412953 type.go:168] "Request Body" body=""
	I1210 07:48:05.663143  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:05.663537  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:06.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:06.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:06.163194  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:06.662713  412953 type.go:168] "Request Body" body=""
	I1210 07:48:06.662786  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:06.663074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:06.663118  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:07.162740  412953 type.go:168] "Request Body" body=""
	I1210 07:48:07.162814  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:07.163183  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:07.662797  412953 type.go:168] "Request Body" body=""
	I1210 07:48:07.662880  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:07.663236  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:08.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:08.162871  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:08.163193  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:08.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:48:08.662859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:08.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:08.663248  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:09.162797  412953 type.go:168] "Request Body" body=""
	I1210 07:48:09.162876  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:09.163240  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:09.662795  412953 type.go:168] "Request Body" body=""
	I1210 07:48:09.662869  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:09.663133  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:10.162749  412953 type.go:168] "Request Body" body=""
	I1210 07:48:10.162821  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:10.163157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:10.662788  412953 type.go:168] "Request Body" body=""
	I1210 07:48:10.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:10.663447  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:10.663517  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:11.163125  412953 type.go:168] "Request Body" body=""
	I1210 07:48:11.163197  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:11.163452  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:11.663284  412953 type.go:168] "Request Body" body=""
	I1210 07:48:11.663356  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:11.663683  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:12.163495  412953 type.go:168] "Request Body" body=""
	I1210 07:48:12.163578  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:12.163924  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:12.663702  412953 type.go:168] "Request Body" body=""
	I1210 07:48:12.663773  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:12.664091  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:12.664142  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:13.162781  412953 type.go:168] "Request Body" body=""
	I1210 07:48:13.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:13.163174  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:13.662779  412953 type.go:168] "Request Body" body=""
	I1210 07:48:13.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:13.663203  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:14.162879  412953 type.go:168] "Request Body" body=""
	I1210 07:48:14.162951  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:14.163288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:14.663062  412953 type.go:168] "Request Body" body=""
	I1210 07:48:14.663137  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:14.663477  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:15.163290  412953 type.go:168] "Request Body" body=""
	I1210 07:48:15.163371  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:15.163703  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:15.163759  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:15.662995  412953 type.go:168] "Request Body" body=""
	I1210 07:48:15.663090  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:15.663504  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:16.163293  412953 type.go:168] "Request Body" body=""
	I1210 07:48:16.163371  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:16.163704  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:16.663391  412953 type.go:168] "Request Body" body=""
	I1210 07:48:16.663468  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:16.663824  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:17.163573  412953 type.go:168] "Request Body" body=""
	I1210 07:48:17.163645  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:17.163900  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:17.163949  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:17.662654  412953 type.go:168] "Request Body" body=""
	I1210 07:48:17.662730  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:17.663090  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:18.162799  412953 type.go:168] "Request Body" body=""
	I1210 07:48:18.162871  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:18.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:18.662707  412953 type.go:168] "Request Body" body=""
	I1210 07:48:18.662777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:18.663073  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:19.162733  412953 type.go:168] "Request Body" body=""
	I1210 07:48:19.162808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:19.163121  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:19.662805  412953 type.go:168] "Request Body" body=""
	I1210 07:48:19.662883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:19.663227  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:19.663286  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:20.162738  412953 type.go:168] "Request Body" body=""
	I1210 07:48:20.162812  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:20.163160  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:20.662944  412953 type.go:168] "Request Body" body=""
	I1210 07:48:20.663033  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:20.663345  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:21.162798  412953 type.go:168] "Request Body" body=""
	I1210 07:48:21.162875  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:21.163231  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:21.662783  412953 type.go:168] "Request Body" body=""
	I1210 07:48:21.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:21.663168  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:22.162769  412953 type.go:168] "Request Body" body=""
	I1210 07:48:22.162847  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:22.163187  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:22.163245  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:22.662786  412953 type.go:168] "Request Body" body=""
	I1210 07:48:22.662860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:22.663219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:23.162659  412953 type.go:168] "Request Body" body=""
	I1210 07:48:23.162735  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:23.163055  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:23.662748  412953 type.go:168] "Request Body" body=""
	I1210 07:48:23.662823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:23.663195  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:24.162905  412953 type.go:168] "Request Body" body=""
	I1210 07:48:24.162979  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:24.163329  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:24.163388  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:24.662665  412953 type.go:168] "Request Body" body=""
	I1210 07:48:24.662746  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:24.662999  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:25.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:48:25.162867  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:25.163229  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:25.662989  412953 type.go:168] "Request Body" body=""
	I1210 07:48:25.663082  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:25.663400  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[the GET poll shown above repeats unchanged every ~500 ms from 07:48:26.162 through 07:49:27.663 (pid 412953), every attempt returning an empty response (status="" headers="" milliseconds=0); roughly every 2 s the failure is surfaced as a retry warning of the following form:]
	W1210 07:48:26.663268  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	W1210 07:49:27.663116  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:28.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:28.162842  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:28.163197  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:28.662785  412953 type.go:168] "Request Body" body=""
	I1210 07:49:28.662870  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:28.663195  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:29.162663  412953 type.go:168] "Request Body" body=""
	I1210 07:49:29.162733  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:29.163065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:29.662764  412953 type.go:168] "Request Body" body=""
	I1210 07:49:29.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:29.663210  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:29.663268  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:30.162965  412953 type.go:168] "Request Body" body=""
	I1210 07:49:30.163064  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:30.163387  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:30.663348  412953 type.go:168] "Request Body" body=""
	I1210 07:49:30.663415  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:30.663681  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:31.163525  412953 type.go:168] "Request Body" body=""
	I1210 07:49:31.163602  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:31.163937  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:31.662666  412953 type.go:168] "Request Body" body=""
	I1210 07:49:31.662743  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:31.663114  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:32.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:49:32.162874  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:32.163205  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:32.163272  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:32.662932  412953 type.go:168] "Request Body" body=""
	I1210 07:49:32.663029  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:32.663371  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:33.162797  412953 type.go:168] "Request Body" body=""
	I1210 07:49:33.162872  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:33.163215  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:33.662690  412953 type.go:168] "Request Body" body=""
	I1210 07:49:33.662765  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:33.663040  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:34.162731  412953 type.go:168] "Request Body" body=""
	I1210 07:49:34.162811  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:34.163174  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:34.663120  412953 type.go:168] "Request Body" body=""
	I1210 07:49:34.663195  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:34.663560  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:34.663615  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:35.163345  412953 type.go:168] "Request Body" body=""
	I1210 07:49:35.163419  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:35.163698  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:35.663698  412953 type.go:168] "Request Body" body=""
	I1210 07:49:35.663775  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:35.664098  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:36.162816  412953 type.go:168] "Request Body" body=""
	I1210 07:49:36.162893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:36.163232  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:36.662691  412953 type.go:168] "Request Body" body=""
	I1210 07:49:36.662766  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:36.663041  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:37.162771  412953 type.go:168] "Request Body" body=""
	I1210 07:49:37.162853  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:37.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:37.163246  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:37.662782  412953 type.go:168] "Request Body" body=""
	I1210 07:49:37.662863  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:37.663181  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:38.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:49:38.162866  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:38.163339  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:38.663026  412953 type.go:168] "Request Body" body=""
	I1210 07:49:38.663107  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:38.663399  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:39.162747  412953 type.go:168] "Request Body" body=""
	I1210 07:49:39.162828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:39.163215  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:39.163277  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:39.662785  412953 type.go:168] "Request Body" body=""
	I1210 07:49:39.662860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:39.663137  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:40.162787  412953 type.go:168] "Request Body" body=""
	I1210 07:49:40.162874  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:40.163221  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:40.663148  412953 type.go:168] "Request Body" body=""
	I1210 07:49:40.663236  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:40.663586  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:41.163335  412953 type.go:168] "Request Body" body=""
	I1210 07:49:41.163407  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:41.163738  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:41.163786  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:41.663533  412953 type.go:168] "Request Body" body=""
	I1210 07:49:41.663607  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:41.663906  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:42.163638  412953 type.go:168] "Request Body" body=""
	I1210 07:49:42.163723  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:42.164115  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:42.663519  412953 type.go:168] "Request Body" body=""
	I1210 07:49:42.663591  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:42.663894  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:43.163685  412953 type.go:168] "Request Body" body=""
	I1210 07:49:43.163760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:43.164112  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:43.164166  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:43.662835  412953 type.go:168] "Request Body" body=""
	I1210 07:49:43.662918  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:43.663257  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:44.162716  412953 type.go:168] "Request Body" body=""
	I1210 07:49:44.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:44.163078  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:44.663092  412953 type.go:168] "Request Body" body=""
	I1210 07:49:44.663175  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:44.663483  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:45.163326  412953 type.go:168] "Request Body" body=""
	I1210 07:49:45.163419  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:45.163779  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:45.662654  412953 type.go:168] "Request Body" body=""
	I1210 07:49:45.662729  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:45.663065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:45.663117  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:46.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:46.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:46.163223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:46.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:49:46.662868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:46.663236  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:47.162751  412953 type.go:168] "Request Body" body=""
	I1210 07:49:47.162831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:47.163170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:47.662766  412953 type.go:168] "Request Body" body=""
	I1210 07:49:47.662844  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:47.663198  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:47.663257  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:48.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:49:48.162890  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:48.163263  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:48.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:49:48.662776  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:48.663061  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:49.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:49:49.162894  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:49.163340  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:49.662794  412953 type.go:168] "Request Body" body=""
	I1210 07:49:49.662875  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:49.663233  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:49.663295  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:50.163641  412953 type.go:168] "Request Body" body=""
	I1210 07:49:50.163727  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:50.163987  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:50.663073  412953 type.go:168] "Request Body" body=""
	I1210 07:49:50.663155  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:50.663511  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:51.163122  412953 type.go:168] "Request Body" body=""
	I1210 07:49:51.163206  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:51.163540  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:51.663260  412953 type.go:168] "Request Body" body=""
	I1210 07:49:51.663334  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:51.663585  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:51.663641  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:52.163471  412953 type.go:168] "Request Body" body=""
	I1210 07:49:52.163547  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:52.163896  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:52.663720  412953 type.go:168] "Request Body" body=""
	I1210 07:49:52.663796  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:52.664128  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:53.162655  412953 type.go:168] "Request Body" body=""
	I1210 07:49:53.162729  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:53.162984  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:53.662712  412953 type.go:168] "Request Body" body=""
	I1210 07:49:53.662785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:53.663162  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:54.162732  412953 type.go:168] "Request Body" body=""
	I1210 07:49:54.162812  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:54.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:54.163230  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:54.663007  412953 type.go:168] "Request Body" body=""
	I1210 07:49:54.663092  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:54.663348  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:55.162765  412953 type.go:168] "Request Body" body=""
	I1210 07:49:55.162839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:55.163203  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:55.662783  412953 type.go:168] "Request Body" body=""
	I1210 07:49:55.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:55.663208  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:56.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:49:56.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:56.163134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:56.662828  412953 type.go:168] "Request Body" body=""
	I1210 07:49:56.662903  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:56.663245  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:56.663302  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:57.162993  412953 type.go:168] "Request Body" body=""
	I1210 07:49:57.163092  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:57.163459  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:57.663133  412953 type.go:168] "Request Body" body=""
	I1210 07:49:57.663213  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:57.663561  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:58.163311  412953 type.go:168] "Request Body" body=""
	I1210 07:49:58.163387  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:58.163735  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:58.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:49:58.663591  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:58.663925  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:58.663979  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:59.163560  412953 type.go:168] "Request Body" body=""
	I1210 07:49:59.163638  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:59.163958  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:59.662670  412953 type.go:168] "Request Body" body=""
	I1210 07:49:59.662749  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:59.663105  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:00.162806  412953 type.go:168] "Request Body" body=""
	I1210 07:50:00.162893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:00.163244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:00.663460  412953 type.go:168] "Request Body" body=""
	I1210 07:50:00.663540  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:00.664068  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:00.664196  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:01.162803  412953 type.go:168] "Request Body" body=""
	I1210 07:50:01.162896  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:01.163273  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:01.662786  412953 type.go:168] "Request Body" body=""
	I1210 07:50:01.662862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:01.663242  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:02.163061  412953 type.go:168] "Request Body" body=""
	I1210 07:50:02.163133  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:02.163407  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:02.662773  412953 type.go:168] "Request Body" body=""
	I1210 07:50:02.662847  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:02.663177  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:03.162915  412953 type.go:168] "Request Body" body=""
	I1210 07:50:03.162990  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:03.163330  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:03.163387  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:03.662715  412953 type.go:168] "Request Body" body=""
	I1210 07:50:03.662782  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:03.663054  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:04.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:50:04.162866  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:04.163214  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:04.663178  412953 type.go:168] "Request Body" body=""
	I1210 07:50:04.663273  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:04.663624  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:05.163424  412953 type.go:168] "Request Body" body=""
	I1210 07:50:05.163513  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:05.163807  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:05.163852  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:05.662744  412953 type.go:168] "Request Body" body=""
	I1210 07:50:05.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:05.663225  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:06.162951  412953 type.go:168] "Request Body" body=""
	I1210 07:50:06.163056  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:06.163423  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:06.662712  412953 type.go:168] "Request Body" body=""
	I1210 07:50:06.662782  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:06.663083  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:07.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:50:07.162850  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:07.163201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:07.662893  412953 type.go:168] "Request Body" body=""
	I1210 07:50:07.662969  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:07.663309  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:07.663366  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:08.162691  412953 type.go:168] "Request Body" body=""
	I1210 07:50:08.162763  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:08.163063  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:08.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:50:08.662868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:08.663218  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:09.162919  412953 type.go:168] "Request Body" body=""
	I1210 07:50:09.163000  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:09.163347  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:09.662690  412953 type.go:168] "Request Body" body=""
	I1210 07:50:09.662770  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:09.663051  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:10.162778  412953 type.go:168] "Request Body" body=""
	I1210 07:50:10.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:10.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:10.163249  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:10.662950  412953 type.go:168] "Request Body" body=""
	I1210 07:50:10.663039  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:10.663349  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:11.162723  412953 type.go:168] "Request Body" body=""
	I1210 07:50:11.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:11.163072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:11.662754  412953 type.go:168] "Request Body" body=""
	I1210 07:50:11.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:11.663172  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:12.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:50:12.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:12.163253  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:12.163314  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:12.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:50:12.662806  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:12.663135  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:13.162827  412953 type.go:168] "Request Body" body=""
	I1210 07:50:13.162909  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:13.163270  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:13.662849  412953 type.go:168] "Request Body" body=""
	I1210 07:50:13.662924  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:13.663237  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:14.162709  412953 type.go:168] "Request Body" body=""
	I1210 07:50:14.162777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:14.163050  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:14.663072  412953 type.go:168] "Request Body" body=""
	I1210 07:50:14.663154  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:14.663480  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:14.663533  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:15.163298  412953 type.go:168] "Request Body" body=""
	I1210 07:50:15.163374  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:15.163686  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:15.662909  412953 node_ready.go:38] duration metric: took 6m0.000357427s for node "functional-314220" to be "Ready" ...
	I1210 07:50:15.669570  412953 out.go:203] 
	W1210 07:50:15.672493  412953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:50:15.672574  412953 out.go:285] * 
	W1210 07:50:15.674736  412953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:50:15.677520  412953 out.go:203] 

                                                
                                                
** /stderr **
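The stderr log above is one retry loop: roughly every 500 ms for six minutes, minikube issues GET https://192.168.49.2:8441/api/v1/nodes/functional-314220 and the dial fails with "connect: connection refused", i.e. nothing was listening on the apiserver port for the entire wait. A minimal diagnostic sketch in Go (not part of minikube; the address is simply copied from the log above) that reproduces the failing dial:

// apiserver_probe.go — a minimal diagnostic sketch, not part of minikube.
// It attempts the same TCP dial the log above shows failing, to confirm
// whether anything is listening on the apiserver endpoint. The address is
// copied from the log; adjust it for your cluster.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.49.2:8441" // endpoint taken from the failing requests above
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connect: connection refused" here matches the log: the container
		// is reachable, but kube-apiserver is not accepting connections.
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("dial %s succeeded; the apiserver port is open\n", addr)
}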
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-314220 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m5.808695681s for "functional-314220" cluster.
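The exit status 80 reported above accompanies the GUEST_START failure in the stderr: WaitNodeCondition polled until its 6m0s context expired ("context deadline exceeded") without the node ever reporting Ready. As an illustration only (this is not minikube's code; checkReady is a hypothetical stand-in for the node "Ready" condition lookup that keeps failing with connection refused), the deadline-bounded poll pattern visible in the log looks like this in Go:

// wait_ready_sketch.go — an illustrative sketch of the retry pattern in the
// stderr above (poll every 500 ms until a deadline expires). This is NOT
// minikube's implementation.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func waitNodeReady(ctx context.Context, checkReady func() (bool, error)) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Mirrors "WaitNodeCondition: context deadline exceeded".
			return ctx.Err()
		case <-ticker.C:
			ready, err := checkReady()
			if err != nil {
				continue // e.g. connection refused: log the error and retry
			}
			if ready {
				return nil
			}
		}
	}
}

func main() {
	// The real test waits 6m0s; a 2s deadline keeps this sketch quick to run.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	err := waitNodeReady(ctx, func() (bool, error) {
		return false, errors.New("connection refused") // always failing, as in the log
	})
	fmt.Println("result:", err) // prints "context deadline exceeded"
}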
I1210 07:50:16.253217  378528 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
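The helper later reads single fields out of this inspect dump with Go templates rather than parsing the full JSON, and the same trick works by hand. A minimal sketch, assuming the container is still running, that pulls the host port mapped to the apiserver port 8441/tcp (33161 in the NetworkSettings block above):

	docker inspect functional-314220 \
	  --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'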
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (347.622466ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
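The host shows Running yet the status command exits 2, which is why the helper flags it as "may be ok": minikube status encodes unhealthy components (kubelet, apiserver, and so on) into a non-zero exit code even when the container itself is up. Dropping the --format filter prints the per-component breakdown; a sketch with the same binary and profile:

	out/minikube-linux-arm64 status -p functional-314220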
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-446865 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdspecific-port2634105139/001:/mount-9p --alsologtostderr -v=1 --port 46464                 │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ ssh            │ functional-446865 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh -- ls -la /mount-9p                                                                                                         │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh sudo umount -f /mount-9p                                                                                                    │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount1 --alsologtostderr -v=1                                │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount2 --alsologtostderr -v=1                                │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ ssh            │ functional-446865 ssh findmnt -T /mount1                                                                                                          │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount3 --alsologtostderr -v=1                                │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ ssh            │ functional-446865 ssh findmnt -T /mount2                                                                                                          │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh findmnt -T /mount3                                                                                                          │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ mount          │ -p functional-446865 --kill=true                                                                                                                  │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ update-context │ functional-446865 update-context --alsologtostderr -v=2                                                                                           │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ update-context │ functional-446865 update-context --alsologtostderr -v=2                                                                                           │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ update-context │ functional-446865 update-context --alsologtostderr -v=2                                                                                           │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format short --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format yaml --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh pgrep buildkitd                                                                                                             │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ image          │ functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr                                            │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format json --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls                                                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format table --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ delete         │ -p functional-446865                                                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ start          │ -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ start          │ -p functional-314220 --alsologtostderr -v=8                                                                                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:44 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:44:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:44:10.487397  412953 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:44:10.487521  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487566  412953 out.go:374] Setting ErrFile to fd 2...
	I1210 07:44:10.487572  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487834  412953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:44:10.488205  412953 out.go:368] Setting JSON to false
	I1210 07:44:10.489052  412953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8801,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:44:10.489127  412953 start.go:143] virtualization:  
	I1210 07:44:10.492628  412953 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:44:10.495451  412953 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:44:10.495581  412953 notify.go:221] Checking for updates...
	I1210 07:44:10.501282  412953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:44:10.504171  412953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:10.506968  412953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:44:10.509885  412953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:44:10.512742  412953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:44:10.516079  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:10.516221  412953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:44:10.539133  412953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:44:10.539253  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.606789  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.597593273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.606896  412953 docker.go:319] overlay module found
	I1210 07:44:10.611915  412953 out.go:179] * Using the docker driver based on existing profile
	I1210 07:44:10.614862  412953 start.go:309] selected driver: docker
	I1210 07:44:10.614885  412953 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.614994  412953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:44:10.615113  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.673141  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.664474897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.673572  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:10.673631  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:10.673679  412953 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.678679  412953 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:44:10.681372  412953 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:44:10.684277  412953 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:44:10.687267  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:10.687329  412953 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:44:10.687343  412953 cache.go:65] Caching tarball of preloaded images
	I1210 07:44:10.687350  412953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:44:10.687434  412953 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:44:10.687444  412953 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 07:44:10.687550  412953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:44:10.707132  412953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:44:10.707156  412953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:44:10.707176  412953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:44:10.707214  412953 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:44:10.707283  412953 start.go:364] duration metric: took 45.104µs to acquireMachinesLock for "functional-314220"
	I1210 07:44:10.707306  412953 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:44:10.707317  412953 fix.go:54] fixHost starting: 
	I1210 07:44:10.707577  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:10.723920  412953 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:44:10.723951  412953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:44:10.727176  412953 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:44:10.727205  412953 machine.go:94] provisionDockerMachine start ...
	I1210 07:44:10.727283  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.744553  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.744931  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.744946  412953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:44:10.878742  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:10.878763  412953 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:44:10.878828  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.897712  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.898057  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.898077  412953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:44:11.052065  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:11.052160  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.072344  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.072686  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.072703  412953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:44:11.207289  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:44:11.207317  412953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:44:11.207348  412953 ubuntu.go:190] setting up certificates
	I1210 07:44:11.207366  412953 provision.go:84] configureAuth start
	I1210 07:44:11.207429  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:11.224935  412953 provision.go:143] copyHostCerts
	I1210 07:44:11.224978  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225021  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:44:11.225032  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225107  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:44:11.225201  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225224  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:44:11.225234  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225268  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:44:11.225321  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225345  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:44:11.225354  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225380  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:44:11.225441  412953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:44:11.417392  412953 provision.go:177] copyRemoteCerts
	I1210 07:44:11.417460  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:44:11.417497  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.436410  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:11.535532  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 07:44:11.535603  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:44:11.553463  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 07:44:11.553526  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:44:11.571834  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 07:44:11.571892  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:44:11.590409  412953 provision.go:87] duration metric: took 383.016251ms to configureAuth
	I1210 07:44:11.590435  412953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:44:11.590614  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:11.590731  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.608257  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.608571  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.608596  412953 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:44:11.906129  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:44:11.906170  412953 machine.go:97] duration metric: took 1.17895657s to provisionDockerMachine
	I1210 07:44:11.906181  412953 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:44:11.906194  412953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:44:11.906264  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:44:11.906303  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.923285  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.019543  412953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:44:12.023176  412953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 07:44:12.023203  412953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 07:44:12.023208  412953 command_runner.go:130] > VERSION_ID="12"
	I1210 07:44:12.023217  412953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 07:44:12.023222  412953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 07:44:12.023226  412953 command_runner.go:130] > ID=debian
	I1210 07:44:12.023231  412953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 07:44:12.023236  412953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 07:44:12.023245  412953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 07:44:12.023295  412953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:44:12.023316  412953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:44:12.023330  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:44:12.023386  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:44:12.023472  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:44:12.023483  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /etc/ssl/certs/3785282.pem
	I1210 07:44:12.023563  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:44:12.023571  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> /etc/test/nested/copy/378528/hosts
	I1210 07:44:12.023617  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:44:12.031659  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:12.049814  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:44:12.067644  412953 start.go:296] duration metric: took 161.447867ms for postStartSetup
	I1210 07:44:12.067748  412953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:44:12.067798  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.084856  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.184547  412953 command_runner.go:130] > 14%
	I1210 07:44:12.184639  412953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:44:12.189562  412953 command_runner.go:130] > 169G
	I1210 07:44:12.189589  412953 fix.go:56] duration metric: took 1.4822703s for fixHost
	I1210 07:44:12.189600  412953 start.go:83] releasing machines lock for "functional-314220", held for 1.482305303s
	I1210 07:44:12.189668  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:12.206193  412953 ssh_runner.go:195] Run: cat /version.json
	I1210 07:44:12.206242  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.206484  412953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:44:12.206547  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.229509  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.231766  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.322395  412953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 07:44:12.322525  412953 ssh_runner.go:195] Run: systemctl --version
	I1210 07:44:12.409743  412953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 07:44:12.412779  412953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 07:44:12.412818  412953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 07:44:12.412894  412953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:44:12.460937  412953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 07:44:12.466609  412953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 07:44:12.466697  412953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:44:12.466802  412953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:44:12.474626  412953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:44:12.474651  412953 start.go:496] detecting cgroup driver to use...
	I1210 07:44:12.474708  412953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:44:12.474780  412953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:44:12.490092  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:44:12.503562  412953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:44:12.503627  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:44:12.518840  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:44:12.531838  412953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:44:12.642559  412953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:44:12.762873  412953 docker.go:234] disabling docker service ...
	I1210 07:44:12.762979  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:44:12.778725  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:44:12.791652  412953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:44:12.911705  412953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:44:13.035394  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:44:13.049695  412953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:44:13.065431  412953 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1210 07:44:13.065522  412953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:44:13.065609  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.075381  412953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:44:13.075482  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.085452  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.094855  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.104471  412953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:44:13.112786  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.121728  412953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.130205  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.139248  412953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:44:13.145900  412953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 07:44:13.147163  412953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:44:13.154995  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.289205  412953 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:44:13.445871  412953 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:44:13.446002  412953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:44:13.449677  412953 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 07:44:13.449750  412953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 07:44:13.449774  412953 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1210 07:44:13.449787  412953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:13.449793  412953 command_runner.go:130] > Access: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449816  412953 command_runner.go:130] > Modify: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449826  412953 command_runner.go:130] > Change: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449830  412953 command_runner.go:130] >  Birth: -
	I1210 07:44:13.449864  412953 start.go:564] Will wait 60s for crictl version
	I1210 07:44:13.449928  412953 ssh_runner.go:195] Run: which crictl
	I1210 07:44:13.453538  412953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 07:44:13.453678  412953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:44:13.477475  412953 command_runner.go:130] > Version:  0.1.0
	I1210 07:44:13.477498  412953 command_runner.go:130] > RuntimeName:  cri-o
	I1210 07:44:13.477503  412953 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 07:44:13.477509  412953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 07:44:13.477520  412953 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:44:13.477602  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.505751  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.505796  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.505803  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.505808  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.505813  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.505817  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.505821  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.505826  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.505835  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.505838  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.505844  412953 command_runner.go:130] >      static
	I1210 07:44:13.505848  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.505852  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.505859  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.505863  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.505874  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.505877  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.505881  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.505886  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.505895  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.507701  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.535170  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.535233  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.535254  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.535275  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.535296  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.535314  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.535334  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.535358  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.535377  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.535395  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.535414  412953 command_runner.go:130] >      static
	I1210 07:44:13.535432  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.535451  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.535471  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.535489  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.535518  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.535548  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.535566  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.535590  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.535609  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.540516  412953 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:44:13.543340  412953 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:44:13.558881  412953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:44:13.562785  412953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
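
The grep above verifies that /etc/hosts already maps host.minikube.internal to the network gateway (192.168.49.1). A sketch of the same check-then-append logic in Go (a hypothetical helper, not minikube's implementation; it assumes write access to the file):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry appends entry to path unless an identical line exists.
	func ensureHostsEntry(path, entry string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.TrimSpace(line) == entry {
				return nil // already present, nothing to do
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintln(f, entry)
		return err
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.49.1\thost.minikube.internal"); err != nil {
			fmt.Println("error:", err)
		}
	}
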
	I1210 07:44:13.562964  412953 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
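
The spec printed by kubeadm.go:884 is a Go struct rendered with the %+v verb, which is why every field appears as Name:value. A trimmed, hypothetical mirror of the node entry visible above (the real types live in minikube's config package and carry many more fields):

	package main

	import "fmt"

	// Node mirrors only the fields visible in the log line above; this is an
	// illustrative stand-in, not minikube's actual definition.
	type Node struct {
		Name              string
		IP                string
		Port              int
		KubernetesVersion string
		ContainerRuntime  string
		ControlPlane      bool
		Worker            bool
	}

	func main() {
		n := Node{IP: "192.168.49.2", Port: 8441, KubernetesVersion: "v1.35.0-beta.0", ContainerRuntime: "crio", ControlPlane: true, Worker: true}
		// %+v produces the same Field:value rendering seen in the log.
		fmt.Printf("%+v\n", n)
	}
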
	I1210 07:44:13.563103  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:13.563170  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.592036  412953 command_runner.go:130] > {
	I1210 07:44:13.592059  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.592064  412953 command_runner.go:130] >     {
	I1210 07:44:13.592073  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.592083  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592089  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.592093  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592096  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592118  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.592130  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.592138  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592144  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.592154  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592159  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592163  412953 command_runner.go:130] >     },
	I1210 07:44:13.592169  412953 command_runner.go:130] >     {
	I1210 07:44:13.592176  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.592183  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592189  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.592192  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592196  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592207  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.592217  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.592221  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592225  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.592231  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592239  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592246  412953 command_runner.go:130] >     },
	I1210 07:44:13.592249  412953 command_runner.go:130] >     {
	I1210 07:44:13.592255  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.592264  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592269  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.592272  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592278  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592286  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.592297  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.592300  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592306  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.592311  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.592317  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592320  412953 command_runner.go:130] >     },
	I1210 07:44:13.592329  412953 command_runner.go:130] >     {
	I1210 07:44:13.592338  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.592342  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592354  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.592357  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592361  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592374  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.592381  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.592387  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592391  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.592395  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592401  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592405  412953 command_runner.go:130] >       },
	I1210 07:44:13.592420  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592424  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592429  412953 command_runner.go:130] >     },
	I1210 07:44:13.592433  412953 command_runner.go:130] >     {
	I1210 07:44:13.592446  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.592450  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592457  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.592461  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592465  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592474  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.592484  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.592488  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592494  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.592498  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592522  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592525  412953 command_runner.go:130] >       },
	I1210 07:44:13.592530  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592538  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592541  412953 command_runner.go:130] >     },
	I1210 07:44:13.592545  412953 command_runner.go:130] >     {
	I1210 07:44:13.592556  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.592563  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592569  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.592579  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592582  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592591  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.592602  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.592606  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592616  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.592619  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592623  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592628  412953 command_runner.go:130] >       },
	I1210 07:44:13.592633  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592639  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592642  412953 command_runner.go:130] >     },
	I1210 07:44:13.592645  412953 command_runner.go:130] >     {
	I1210 07:44:13.592652  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.592663  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592669  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.592674  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592678  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592691  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.592702  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.592706  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592712  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.592717  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592723  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592726  412953 command_runner.go:130] >     },
	I1210 07:44:13.592729  412953 command_runner.go:130] >     {
	I1210 07:44:13.592735  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.592741  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592747  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.592750  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592764  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592772  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.592793  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.592800  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592804  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.592808  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592817  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592820  412953 command_runner.go:130] >       },
	I1210 07:44:13.592824  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592830  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592834  412953 command_runner.go:130] >     },
	I1210 07:44:13.592843  412953 command_runner.go:130] >     {
	I1210 07:44:13.592849  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.592853  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592858  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.592866  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592870  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592878  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.592888  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.592892  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592898  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.592902  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592911  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.592914  412953 command_runner.go:130] >       },
	I1210 07:44:13.592918  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592924  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.592927  412953 command_runner.go:130] >     }
	I1210 07:44:13.592932  412953 command_runner.go:130] >   ]
	I1210 07:44:13.592935  412953 command_runner.go:130] > }
	I1210 07:44:13.595219  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.595245  412953 crio.go:433] Images already preloaded, skipping extraction
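
The "all images are preloaded" conclusion comes from parsing the `crictl images --output json` dump above and checking it against the image list expected for Kubernetes v1.35.0-beta.0. A sketch of that decision, with struct fields named after the JSON keys in the dump (the expected set below is abbreviated; minikube derives the full list from the Kubernetes version):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the JSON shape shown in the log above.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
			Pinned   bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("error:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Abbreviated; tags taken from the dump above.
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/etcd:3.6.5-0",
			"registry.k8s.io/coredns/coredns:v1.13.1",
			"registry.k8s.io/pause:3.10.1",
		}
		for _, want := range expected {
			if !have[want] {
				fmt.Println("missing:", want, "- preload needed")
				return
			}
		}
		fmt.Println("all images are preloaded for cri-o runtime.")
	}
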
	I1210 07:44:13.595305  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.620833  412953 command_runner.go:130] > {
	I1210 07:44:13.620851  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.620856  412953 command_runner.go:130] >     {
	I1210 07:44:13.620865  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.620870  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620884  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.620888  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620896  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620905  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.620913  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.620917  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620921  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.620925  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620930  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620933  412953 command_runner.go:130] >     },
	I1210 07:44:13.620936  412953 command_runner.go:130] >     {
	I1210 07:44:13.620943  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.620947  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620952  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.620955  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620958  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620966  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.620975  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.620978  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620982  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.620985  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620991  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620994  412953 command_runner.go:130] >     },
	I1210 07:44:13.620997  412953 command_runner.go:130] >     {
	I1210 07:44:13.621003  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.621007  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621012  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.621015  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621019  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621027  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.621035  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.621038  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621042  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.621046  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.621049  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621056  412953 command_runner.go:130] >     },
	I1210 07:44:13.621059  412953 command_runner.go:130] >     {
	I1210 07:44:13.621066  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.621070  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621075  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.621079  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621083  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621091  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.621098  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.621102  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621105  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.621109  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621113  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621116  412953 command_runner.go:130] >       },
	I1210 07:44:13.621124  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621128  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621131  412953 command_runner.go:130] >     },
	I1210 07:44:13.621134  412953 command_runner.go:130] >     {
	I1210 07:44:13.621143  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.621147  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621152  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.621156  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621159  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621167  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.621175  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.621178  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621182  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.621185  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621189  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621192  412953 command_runner.go:130] >       },
	I1210 07:44:13.621196  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621199  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621202  412953 command_runner.go:130] >     },
	I1210 07:44:13.621208  412953 command_runner.go:130] >     {
	I1210 07:44:13.621214  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.621218  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621224  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.621227  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621231  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621239  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.621247  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.621250  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621255  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.621258  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621262  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621265  412953 command_runner.go:130] >       },
	I1210 07:44:13.621268  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621272  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621275  412953 command_runner.go:130] >     },
	I1210 07:44:13.621278  412953 command_runner.go:130] >     {
	I1210 07:44:13.621285  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.621289  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621294  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.621297  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621301  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621309  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.621317  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.621320  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621324  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.621327  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621331  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621334  412953 command_runner.go:130] >     },
	I1210 07:44:13.621337  412953 command_runner.go:130] >     {
	I1210 07:44:13.621343  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.621347  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621352  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.621359  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621363  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621371  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.621390  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.621393  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621397  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.621401  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621404  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621408  412953 command_runner.go:130] >       },
	I1210 07:44:13.621411  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621415  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621418  412953 command_runner.go:130] >     },
	I1210 07:44:13.621421  412953 command_runner.go:130] >     {
	I1210 07:44:13.621427  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.621431  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621435  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.621438  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621442  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621449  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.621456  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.621459  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621463  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.621466  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621470  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.621473  412953 command_runner.go:130] >       },
	I1210 07:44:13.621477  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621481  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.621483  412953 command_runner.go:130] >     }
	I1210 07:44:13.621486  412953 command_runner.go:130] >   ]
	I1210 07:44:13.621490  412953 command_runner.go:130] > }
	I1210 07:44:13.622855  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.622877  412953 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:44:13.622884  412953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:44:13.622995  412953 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
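
The [Unit]/[Service] text above is the kubelet systemd drop-in that minikube generates from the node's name, IP, and Kubernetes version. An illustrative text/template rendering of that ExecStart line (the template wording here is assumed; minikube's real template carries more flags and per-runtime logic):

	package main

	import (
		"os"
		"text/template"
	)

	// dropIn reproduces the unit text logged above, with the node-specific
	// values pulled out as template fields.
	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, struct{ Version, Name, IP string }{
			"v1.35.0-beta.0", "functional-314220", "192.168.49.2",
		})
	}
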
	I1210 07:44:13.623104  412953 ssh_runner.go:195] Run: crio config
	I1210 07:44:13.670610  412953 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 07:44:13.670640  412953 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 07:44:13.670648  412953 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 07:44:13.670652  412953 command_runner.go:130] > #
	I1210 07:44:13.670659  412953 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 07:44:13.670667  412953 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 07:44:13.670674  412953 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 07:44:13.670691  412953 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 07:44:13.670699  412953 command_runner.go:130] > # reload'.
	I1210 07:44:13.670706  412953 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 07:44:13.670713  412953 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 07:44:13.670722  412953 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 07:44:13.670728  412953 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 07:44:13.670733  412953 command_runner.go:130] > [crio]
	I1210 07:44:13.670747  412953 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 07:44:13.670755  412953 command_runner.go:130] > # container images, in this directory.
	I1210 07:44:13.670764  412953 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 07:44:13.670774  412953 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 07:44:13.670784  412953 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 07:44:13.670792  412953 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1210 07:44:13.670799  412953 command_runner.go:130] > # imagestore = ""
	I1210 07:44:13.670805  412953 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 07:44:13.670812  412953 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 07:44:13.670819  412953 command_runner.go:130] > # storage_driver = "overlay"
	I1210 07:44:13.670826  412953 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 07:44:13.670832  412953 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 07:44:13.670839  412953 command_runner.go:130] > # storage_option = [
	I1210 07:44:13.670842  412953 command_runner.go:130] > # ]
	I1210 07:44:13.670848  412953 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 07:44:13.670854  412953 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 07:44:13.670864  412953 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 07:44:13.670876  412953 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 07:44:13.670886  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 07:44:13.670890  412953 command_runner.go:130] > # always happen on a node reboot
	I1210 07:44:13.670897  412953 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 07:44:13.670908  412953 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 07:44:13.670916  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 07:44:13.670921  412953 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 07:44:13.670927  412953 command_runner.go:130] > # version_file_persist = ""
	I1210 07:44:13.670948  412953 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 07:44:13.670957  412953 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 07:44:13.670965  412953 command_runner.go:130] > # internal_wipe = true
	I1210 07:44:13.670973  412953 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 07:44:13.670982  412953 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 07:44:13.670986  412953 command_runner.go:130] > # internal_repair = true
	I1210 07:44:13.670992  412953 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 07:44:13.671000  412953 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 07:44:13.671005  412953 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 07:44:13.671033  412953 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 07:44:13.671041  412953 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 07:44:13.671047  412953 command_runner.go:130] > [crio.api]
	I1210 07:44:13.671052  412953 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 07:44:13.671057  412953 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 07:44:13.671064  412953 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 07:44:13.671297  412953 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 07:44:13.671315  412953 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 07:44:13.671322  412953 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 07:44:13.671326  412953 command_runner.go:130] > # stream_port = "0"
	I1210 07:44:13.671356  412953 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 07:44:13.671366  412953 command_runner.go:130] > # stream_enable_tls = false
	I1210 07:44:13.671373  412953 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 07:44:13.671558  412953 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 07:44:13.671575  412953 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 07:44:13.671582  412953 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671587  412953 command_runner.go:130] > # stream_tls_cert = ""
	I1210 07:44:13.671593  412953 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 07:44:13.671617  412953 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671819  412953 command_runner.go:130] > # stream_tls_key = ""
	I1210 07:44:13.671835  412953 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 07:44:13.671853  412953 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 07:44:13.671864  412953 command_runner.go:130] > # automatically pick up the changes.
	I1210 07:44:13.671868  412953 command_runner.go:130] > # stream_tls_ca = ""
	I1210 07:44:13.671887  412953 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.671896  412953 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 07:44:13.671903  412953 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.672102  412953 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 07:44:13.672121  412953 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 07:44:13.672128  412953 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 07:44:13.672131  412953 command_runner.go:130] > [crio.runtime]
	I1210 07:44:13.672137  412953 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 07:44:13.672162  412953 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 07:44:13.672172  412953 command_runner.go:130] > # "nofile=1024:2048"
	I1210 07:44:13.672179  412953 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 07:44:13.672183  412953 command_runner.go:130] > # default_ulimits = [
	I1210 07:44:13.672188  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672195  412953 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 07:44:13.672201  412953 command_runner.go:130] > # no_pivot = false
	I1210 07:44:13.672207  412953 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 07:44:13.672214  412953 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 07:44:13.672219  412953 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 07:44:13.672235  412953 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 07:44:13.672241  412953 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 07:44:13.672248  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672445  412953 command_runner.go:130] > # conmon = ""
	I1210 07:44:13.672461  412953 command_runner.go:130] > # Cgroup setting for conmon
	I1210 07:44:13.672469  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 07:44:13.672473  412953 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 07:44:13.672480  412953 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 07:44:13.672502  412953 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 07:44:13.672522  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672875  412953 command_runner.go:130] > # conmon_env = [
	I1210 07:44:13.672888  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672895  412953 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 07:44:13.672900  412953 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 07:44:13.672907  412953 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 07:44:13.673114  412953 command_runner.go:130] > # default_env = [
	I1210 07:44:13.673128  412953 command_runner.go:130] > # ]
	I1210 07:44:13.673149  412953 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 07:44:13.673177  412953 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1210 07:44:13.673192  412953 command_runner.go:130] > # selinux = false
	I1210 07:44:13.673200  412953 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 07:44:13.673211  412953 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 07:44:13.673216  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673222  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.673228  412953 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 07:44:13.673240  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673428  412953 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 07:44:13.673444  412953 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 07:44:13.673452  412953 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 07:44:13.673459  412953 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 07:44:13.673478  412953 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 07:44:13.673488  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673492  412953 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 07:44:13.673498  412953 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 07:44:13.673505  412953 command_runner.go:130] > # the cgroup blockio controller.
	I1210 07:44:13.673509  412953 command_runner.go:130] > # blockio_config_file = ""
	I1210 07:44:13.673515  412953 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 07:44:13.673522  412953 command_runner.go:130] > # blockio parameters.
	I1210 07:44:13.673725  412953 command_runner.go:130] > # blockio_reload = false
	I1210 07:44:13.673738  412953 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 07:44:13.673742  412953 command_runner.go:130] > # irqbalance daemon.
	I1210 07:44:13.673748  412953 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 07:44:13.673757  412953 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1210 07:44:13.673788  412953 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 07:44:13.673801  412953 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 07:44:13.673807  412953 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 07:44:13.673816  412953 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 07:44:13.673821  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673830  412953 command_runner.go:130] > # rdt_config_file = ""
	I1210 07:44:13.673837  412953 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 07:44:13.674053  412953 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 07:44:13.674071  412953 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 07:44:13.674076  412953 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 07:44:13.674083  412953 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 07:44:13.674102  412953 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 07:44:13.674116  412953 command_runner.go:130] > # will be added.
	I1210 07:44:13.674121  412953 command_runner.go:130] > # default_capabilities = [
	I1210 07:44:13.674343  412953 command_runner.go:130] > # 	"CHOWN",
	I1210 07:44:13.674352  412953 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 07:44:13.674356  412953 command_runner.go:130] > # 	"FSETID",
	I1210 07:44:13.674359  412953 command_runner.go:130] > # 	"FOWNER",
	I1210 07:44:13.674363  412953 command_runner.go:130] > # 	"SETGID",
	I1210 07:44:13.674366  412953 command_runner.go:130] > # 	"SETUID",
	I1210 07:44:13.674423  412953 command_runner.go:130] > # 	"SETPCAP",
	I1210 07:44:13.674435  412953 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 07:44:13.674593  412953 command_runner.go:130] > # 	"KILL",
	I1210 07:44:13.674604  412953 command_runner.go:130] > # ]
	I1210 07:44:13.674621  412953 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 07:44:13.674632  412953 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 07:44:13.674812  412953 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 07:44:13.674829  412953 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 07:44:13.674836  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.674844  412953 command_runner.go:130] > default_sysctls = [
	I1210 07:44:13.674849  412953 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 07:44:13.674855  412953 command_runner.go:130] > ]
	I1210 07:44:13.674860  412953 command_runner.go:130] > # List of devices on the host that a
	I1210 07:44:13.674883  412953 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 07:44:13.674902  412953 command_runner.go:130] > # allowed_devices = [
	I1210 07:44:13.675282  412953 command_runner.go:130] > # 	"/dev/fuse",
	I1210 07:44:13.675296  412953 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 07:44:13.675300  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675305  412953 command_runner.go:130] > # List of additional devices, specified as
	I1210 07:44:13.675313  412953 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 07:44:13.675339  412953 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 07:44:13.675346  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.675350  412953 command_runner.go:130] > # additional_devices = [
	I1210 07:44:13.675524  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675539  412953 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 07:44:13.675543  412953 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 07:44:13.675549  412953 command_runner.go:130] > # 	"/etc/cdi",
	I1210 07:44:13.675552  412953 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 07:44:13.675555  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675562  412953 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 07:44:13.675584  412953 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 07:44:13.675594  412953 command_runner.go:130] > # Defaults to false.
	I1210 07:44:13.675951  412953 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 07:44:13.675970  412953 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 07:44:13.675978  412953 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 07:44:13.675982  412953 command_runner.go:130] > # hooks_dir = [
	I1210 07:44:13.676213  412953 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 07:44:13.676224  412953 command_runner.go:130] > # ]
	I1210 07:44:13.676231  412953 command_runner.go:130] > # Path to the file specifying the default mounts for each container. The
	I1210 07:44:13.676237  412953 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 07:44:13.676246  412953 command_runner.go:130] > # its default mounts from the following two files:
	I1210 07:44:13.676261  412953 command_runner.go:130] > #
	I1210 07:44:13.676273  412953 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 07:44:13.676280  412953 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 07:44:13.676286  412953 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 07:44:13.676291  412953 command_runner.go:130] > #
	I1210 07:44:13.676298  412953 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 07:44:13.676304  412953 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 07:44:13.676313  412953 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 07:44:13.676318  412953 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 07:44:13.676321  412953 command_runner.go:130] > #
	I1210 07:44:13.676325  412953 command_runner.go:130] > # default_mounts_file = ""
	I1210 07:44:13.676345  412953 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 07:44:13.676358  412953 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 07:44:13.676363  412953 command_runner.go:130] > # pids_limit = -1
	I1210 07:44:13.676375  412953 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 07:44:13.676381  412953 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 07:44:13.676391  412953 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 07:44:13.676400  412953 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 07:44:13.676412  412953 command_runner.go:130] > # log_size_max = -1
	I1210 07:44:13.676423  412953 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 07:44:13.676626  412953 command_runner.go:130] > # log_to_journald = false
	I1210 07:44:13.676643  412953 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 07:44:13.676650  412953 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 07:44:13.676677  412953 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 07:44:13.676879  412953 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 07:44:13.676891  412953 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 07:44:13.676896  412953 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 07:44:13.676903  412953 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 07:44:13.676909  412953 command_runner.go:130] > # read_only = false
	I1210 07:44:13.676916  412953 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 07:44:13.676942  412953 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 07:44:13.676953  412953 command_runner.go:130] > # live configuration reload.
	I1210 07:44:13.676956  412953 command_runner.go:130] > # log_level = "info"
	I1210 07:44:13.676967  412953 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 07:44:13.676977  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.677149  412953 command_runner.go:130] > # log_filter = ""
	I1210 07:44:13.677166  412953 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677173  412953 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 07:44:13.677177  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677186  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677212  412953 command_runner.go:130] > # uid_mappings = ""
	I1210 07:44:13.677225  412953 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677231  412953 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 07:44:13.677238  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677246  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677420  412953 command_runner.go:130] > # gid_mappings = ""
	I1210 07:44:13.677432  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 07:44:13.677439  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677446  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677455  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677480  412953 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 07:44:13.677493  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 07:44:13.677500  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677512  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677522  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677681  412953 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 07:44:13.677697  412953 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 07:44:13.677705  412953 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 07:44:13.677711  412953 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1210 07:44:13.677936  412953 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 07:44:13.677953  412953 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 07:44:13.677960  412953 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 07:44:13.677965  412953 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 07:44:13.677970  412953 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 07:44:13.677991  412953 command_runner.go:130] > # drop_infra_ctr = true
	I1210 07:44:13.678004  412953 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 07:44:13.678011  412953 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 07:44:13.678020  412953 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 07:44:13.678031  412953 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 07:44:13.678039  412953 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 07:44:13.678048  412953 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 07:44:13.678054  412953 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 07:44:13.678068  412953 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 07:44:13.678282  412953 command_runner.go:130] > # shared_cpuset = ""
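A hedged sketch of the Linux CPU list format mentioned above; the CPU numbers are placeholders, not a recommendation:

    # hypothetical example: pin infra containers to CPUs 0-1, allow CPUs 2,3 to be shared
    infra_ctr_cpuset = "0-1"
    shared_cpuset = "2,3"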
	I1210 07:44:13.678299  412953 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 07:44:13.678306  412953 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 07:44:13.678310  412953 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 07:44:13.678328  412953 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 07:44:13.678337  412953 command_runner.go:130] > # pinns_path = ""
	I1210 07:44:13.678343  412953 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 07:44:13.678349  412953 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 07:44:13.678540  412953 command_runner.go:130] > # enable_criu_support = true
	I1210 07:44:13.678551  412953 command_runner.go:130] > # Enable/disable the generation of container and
	I1210 07:44:13.678558  412953 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1210 07:44:13.678563  412953 command_runner.go:130] > # enable_pod_events = false
	I1210 07:44:13.678572  412953 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 07:44:13.678599  412953 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 07:44:13.678604  412953 command_runner.go:130] > # default_runtime = "crun"
	I1210 07:44:13.678609  412953 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 07:44:13.678622  412953 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where such paths are created as directories).
	I1210 07:44:13.678632  412953 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 07:44:13.678642  412953 command_runner.go:130] > # creation as a file is not desired either.
	I1210 07:44:13.678651  412953 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 07:44:13.678663  412953 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 07:44:13.678672  412953 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 07:44:13.678923  412953 command_runner.go:130] > # ]
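A minimal sketch using the /etc/hostname example from the comment above:

    absent_mount_sources_to_reject = [
        "/etc/hostname",
    ]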
	I1210 07:44:13.678950  412953 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 07:44:13.678958  412953 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 07:44:13.678972  412953 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 07:44:13.678982  412953 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 07:44:13.678985  412953 command_runner.go:130] > #
	I1210 07:44:13.678990  412953 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 07:44:13.678995  412953 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 07:44:13.679001  412953 command_runner.go:130] > # runtime_type = "oci"
	I1210 07:44:13.679006  412953 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 07:44:13.679035  412953 command_runner.go:130] > # inherit_default_runtime = false
	I1210 07:44:13.679045  412953 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 07:44:13.679050  412953 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 07:44:13.679054  412953 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 07:44:13.679060  412953 command_runner.go:130] > # monitor_env = []
	I1210 07:44:13.679065  412953 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 07:44:13.679069  412953 command_runner.go:130] > # allowed_annotations = []
	I1210 07:44:13.679076  412953 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 07:44:13.679085  412953 command_runner.go:130] > # no_sync_log = false
	I1210 07:44:13.679101  412953 command_runner.go:130] > # default_annotations = {}
	I1210 07:44:13.679107  412953 command_runner.go:130] > # stream_websockets = false
	I1210 07:44:13.679111  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.679142  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.679152  412953 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 07:44:13.679158  412953 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 07:44:13.679174  412953 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 07:44:13.679188  412953 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 07:44:13.679194  412953 command_runner.go:130] > #   in $PATH.
	I1210 07:44:13.679200  412953 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 07:44:13.679207  412953 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 07:44:13.679213  412953 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 07:44:13.679219  412953 command_runner.go:130] > #   state.
	I1210 07:44:13.679225  412953 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 07:44:13.679231  412953 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 07:44:13.679240  412953 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 07:44:13.679252  412953 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 07:44:13.679260  412953 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 07:44:13.679267  412953 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 07:44:13.679274  412953 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 07:44:13.679281  412953 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 07:44:13.679291  412953 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 07:44:13.679296  412953 command_runner.go:130] > #   The currently recognized values are:
	I1210 07:44:13.679302  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 07:44:13.679311  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 07:44:13.679325  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 07:44:13.679338  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 07:44:13.679345  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 07:44:13.679357  412953 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 07:44:13.679365  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 07:44:13.679374  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for the container init process.
	I1210 07:44:13.679380  412953 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container.
	I1210 07:44:13.679398  412953 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 07:44:13.679409  412953 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 07:44:13.679420  412953 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 07:44:13.679430  412953 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 07:44:13.679436  412953 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 07:44:13.679445  412953 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 07:44:13.679452  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 07:44:13.679461  412953 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 07:44:13.679464  412953 command_runner.go:130] > #   deprecated option "conmon".
	I1210 07:44:13.679478  412953 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 07:44:13.679487  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 07:44:13.679493  412953 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 07:44:13.679503  412953 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 07:44:13.679511  412953 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 07:44:13.679518  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 07:44:13.679525  412953 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1210 07:44:13.679531  412953 command_runner.go:130] > #   conmon-rs by using:
	I1210 07:44:13.679539  412953 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 07:44:13.679560  412953 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 07:44:13.679570  412953 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 07:44:13.679579  412953 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 07:44:13.679584  412953 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 07:44:13.679593  412953 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 07:44:13.679603  412953 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 07:44:13.679608  412953 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 07:44:13.679617  412953 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 07:44:13.679637  412953 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 07:44:13.679641  412953 command_runner.go:130] > #   when a machine crash happens.
	I1210 07:44:13.679649  412953 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 07:44:13.679659  412953 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 07:44:13.679667  412953 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 07:44:13.679675  412953 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 07:44:13.679681  412953 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 07:44:13.679700  412953 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1210 07:44:13.679707  412953 command_runner.go:130] > #
	I1210 07:44:13.679712  412953 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 07:44:13.679716  412953 command_runner.go:130] > #
	I1210 07:44:13.679727  412953 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 07:44:13.679736  412953 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 07:44:13.679742  412953 command_runner.go:130] > #
	I1210 07:44:13.679749  412953 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 07:44:13.679756  412953 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 07:44:13.679761  412953 command_runner.go:130] > #
	I1210 07:44:13.679773  412953 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 07:44:13.679780  412953 command_runner.go:130] > # feature.
	I1210 07:44:13.679782  412953 command_runner.go:130] > #
	I1210 07:44:13.679788  412953 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1210 07:44:13.679799  412953 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 07:44:13.679805  412953 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 07:44:13.679811  412953 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 07:44:13.679819  412953 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 07:44:13.679824  412953 command_runner.go:130] > #
	I1210 07:44:13.679831  412953 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 07:44:13.679840  412953 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 07:44:13.679848  412953 command_runner.go:130] > #
	I1210 07:44:13.679858  412953 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1210 07:44:13.679864  412953 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 07:44:13.679869  412953 command_runner.go:130] > #
	I1210 07:44:13.679875  412953 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 07:44:13.679881  412953 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 07:44:13.679887  412953 command_runner.go:130] > # limitation.
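A minimal sketch of a runtime handler that opts into the notifier, reusing the runc path from the entries below; the handler name is a placeholder:

    [crio.runtime.runtimes.runc-notifier]   # hypothetical handler name
    runtime_path = "/usr/libexec/crio/runc"
    allowed_annotations = [
        "io.kubernetes.cri-o.seccompNotifierAction",
    ]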
	I1210 07:44:13.679891  412953 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 07:44:13.679896  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 07:44:13.679902  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.679909  412953 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 07:44:13.679913  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.679932  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.679940  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.679944  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.679948  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.679957  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.679961  412953 command_runner.go:130] > allowed_annotations = [
	I1210 07:44:13.680169  412953 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 07:44:13.680183  412953 command_runner.go:130] > ]
	I1210 07:44:13.680190  412953 command_runner.go:130] > privileged_without_host_devices = false
	I1210 07:44:13.680195  412953 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 07:44:13.680200  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 07:44:13.680204  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.680218  412953 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 07:44:13.680228  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.680233  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.680237  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.680244  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.680248  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.680257  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.680461  412953 command_runner.go:130] > privileged_without_host_devices = false
	I1210 07:44:13.680480  412953 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 07:44:13.680486  412953 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 07:44:13.680503  412953 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 07:44:13.680522  412953 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1210 07:44:13.680533  412953 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 07:44:13.680547  412953 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 07:44:13.680554  412953 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 07:44:13.680563  412953 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 07:44:13.680579  412953 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 07:44:13.680591  412953 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 07:44:13.680597  412953 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 07:44:13.680609  412953 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 07:44:13.680613  412953 command_runner.go:130] > # Example:
	I1210 07:44:13.680617  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 07:44:13.680625  412953 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 07:44:13.680632  412953 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 07:44:13.680643  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 07:44:13.680656  412953 command_runner.go:130] > # cpuset = "0-1"
	I1210 07:44:13.680660  412953 command_runner.go:130] > # cpushares = "5"
	I1210 07:44:13.680672  412953 command_runner.go:130] > # cpuquota = "1000"
	I1210 07:44:13.680676  412953 command_runner.go:130] > # cpuperiod = "100000"
	I1210 07:44:13.680680  412953 command_runner.go:130] > # cpulimit = "35"
	I1210 07:44:13.680686  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.680691  412953 command_runner.go:130] > # The workload name is workload-type.
	I1210 07:44:13.680706  412953 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 07:44:13.680717  412953 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 07:44:13.680730  412953 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 07:44:13.680742  412953 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 07:44:13.680748  412953 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1210 07:44:13.680756  412953 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 07:44:13.680763  412953 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 07:44:13.680767  412953 command_runner.go:130] > # Default value is set to true
	I1210 07:44:13.681004  412953 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 07:44:13.681022  412953 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 07:44:13.681028  412953 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 07:44:13.681032  412953 command_runner.go:130] > # Default value is set to 'false'
	I1210 07:44:13.681046  412953 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 07:44:13.681057  412953 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1210 07:44:13.681066  412953 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 07:44:13.681072  412953 command_runner.go:130] > # timezone = ""
	I1210 07:44:13.681078  412953 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 07:44:13.681082  412953 command_runner.go:130] > #
	I1210 07:44:13.681089  412953 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 07:44:13.681101  412953 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 07:44:13.681105  412953 command_runner.go:130] > [crio.image]
	I1210 07:44:13.681112  412953 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 07:44:13.681133  412953 command_runner.go:130] > # default_transport = "docker://"
	I1210 07:44:13.681145  412953 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 07:44:13.681152  412953 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681158  412953 command_runner.go:130] > # global_auth_file = ""
	I1210 07:44:13.681163  412953 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 07:44:13.681168  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681175  412953 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.681182  412953 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 07:44:13.681198  412953 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681207  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681403  412953 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 07:44:13.681421  412953 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 07:44:13.681429  412953 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 07:44:13.681436  412953 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 07:44:13.681442  412953 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 07:44:13.681460  412953 command_runner.go:130] > # pause_command = "/pause"
	I1210 07:44:13.681466  412953 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 07:44:13.681473  412953 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 07:44:13.681481  412953 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 07:44:13.681487  412953 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 07:44:13.681495  412953 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 07:44:13.681508  412953 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 07:44:13.681512  412953 command_runner.go:130] > # pinned_images = [
	I1210 07:44:13.681700  412953 command_runner.go:130] > # ]
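For example, to pin the pause image referenced above so the kubelet never garbage-collects it (a sketch, not part of this run's config):

    pinned_images = [
        "registry.k8s.io/pause:3.10.1",
    ]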
	I1210 07:44:13.681712  412953 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 07:44:13.681720  412953 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 07:44:13.681726  412953 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 07:44:13.681733  412953 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 07:44:13.681759  412953 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 07:44:13.681771  412953 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 07:44:13.681777  412953 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 07:44:13.681786  412953 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 07:44:13.681793  412953 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 07:44:13.681800  412953 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or the
	I1210 07:44:13.681806  412953 command_runner.go:130] > # system-wide policy will be used as fallback. Must be an absolute path.
	I1210 07:44:13.682016  412953 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 07:44:13.682034  412953 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 07:44:13.682042  412953 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 07:44:13.682046  412953 command_runner.go:130] > # changing them here.
	I1210 07:44:13.682052  412953 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 07:44:13.682069  412953 command_runner.go:130] > # insecure_registries = [
	I1210 07:44:13.682078  412953 command_runner.go:130] > # ]
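The registries.conf replacement mentioned above is also TOML; a minimal sketch, with registry.example.com as a placeholder:

    # /etc/containers/registries.conf (v2 format)
    [[registry]]
    location = "registry.example.com"
    insecure = true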
	I1210 07:44:13.682085  412953 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 07:44:13.682090  412953 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1210 07:44:13.682257  412953 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 07:44:13.682273  412953 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 07:44:13.682285  412953 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 07:44:13.682292  412953 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 07:44:13.682299  412953 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 07:44:13.682504  412953 command_runner.go:130] > # auto_reload_registries = false
	I1210 07:44:13.682520  412953 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 07:44:13.682532  412953 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1210 07:44:13.682540  412953 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 07:44:13.682567  412953 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1210 07:44:13.682578  412953 command_runner.go:130] > # The mode of short name resolution.
	I1210 07:44:13.682585  412953 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 07:44:13.682595  412953 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 07:44:13.682600  412953 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 07:44:13.682615  412953 command_runner.go:130] > # short_name_mode = "enforcing"
	I1210 07:44:13.682622  412953 command_runner.go:130] > # oci_artifact_mount_support determines whether CRI-O should support OCI artifacts.
	I1210 07:44:13.682630  412953 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 07:44:13.683045  412953 command_runner.go:130] > # oci_artifact_mount_support = true
	I1210 07:44:13.683063  412953 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 07:44:13.683080  412953 command_runner.go:130] > # CNI plugins.
	I1210 07:44:13.683084  412953 command_runner.go:130] > [crio.network]
	I1210 07:44:13.683091  412953 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 07:44:13.683100  412953 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1210 07:44:13.683104  412953 command_runner.go:130] > # cni_default_network = ""
	I1210 07:44:13.683110  412953 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 07:44:13.683116  412953 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 07:44:13.683122  412953 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 07:44:13.683126  412953 command_runner.go:130] > # plugin_dirs = [
	I1210 07:44:13.683439  412953 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 07:44:13.683727  412953 command_runner.go:130] > # ]
	I1210 07:44:13.683742  412953 command_runner.go:130] > # List of included pod metrics.
	I1210 07:44:13.684014  412953 command_runner.go:130] > # included_pod_metrics = [
	I1210 07:44:13.684312  412953 command_runner.go:130] > # ]
	I1210 07:44:13.684328  412953 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1210 07:44:13.684333  412953 command_runner.go:130] > [crio.metrics]
	I1210 07:44:13.684339  412953 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 07:44:13.684905  412953 command_runner.go:130] > # enable_metrics = false
	I1210 07:44:13.684921  412953 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 07:44:13.684926  412953 command_runner.go:130] > # Per default all metrics are enabled.
	I1210 07:44:13.684933  412953 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 07:44:13.684946  412953 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 07:44:13.684969  412953 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 07:44:13.685240  412953 command_runner.go:130] > # metrics_collectors = [
	I1210 07:44:13.685580  412953 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 07:44:13.685893  412953 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 07:44:13.686203  412953 command_runner.go:130] > # 	"containers_oom_total",
	I1210 07:44:13.686514  412953 command_runner.go:130] > # 	"processes_defunct",
	I1210 07:44:13.686821  412953 command_runner.go:130] > # 	"operations_total",
	I1210 07:44:13.687152  412953 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 07:44:13.687476  412953 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 07:44:13.687786  412953 command_runner.go:130] > # 	"operations_errors_total",
	I1210 07:44:13.688090  412953 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 07:44:13.688395  412953 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 07:44:13.688727  412953 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 07:44:13.689070  412953 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 07:44:13.689083  412953 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 07:44:13.689089  412953 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 07:44:13.689093  412953 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 07:44:13.689098  412953 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 07:44:13.689634  412953 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 07:44:13.689646  412953 command_runner.go:130] > # ]
	I1210 07:44:13.689654  412953 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 07:44:13.689658  412953 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 07:44:13.689671  412953 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 07:44:13.689696  412953 command_runner.go:130] > # metrics_port = 9090
	I1210 07:44:13.689701  412953 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 07:44:13.689706  412953 command_runner.go:130] > # metrics_socket = ""
	I1210 07:44:13.689716  412953 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 07:44:13.689722  412953 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 07:44:13.689731  412953 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 07:44:13.689737  412953 command_runner.go:130] > # certificate on any modification event.
	I1210 07:44:13.689741  412953 command_runner.go:130] > # metrics_cert = ""
	I1210 07:44:13.689746  412953 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 07:44:13.689751  412953 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 07:44:13.689764  412953 command_runner.go:130] > # metrics_key = ""
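A minimal sketch that enables the metrics endpoint, using only the defaults documented above:

    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090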
	I1210 07:44:13.689770  412953 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 07:44:13.689774  412953 command_runner.go:130] > [crio.tracing]
	I1210 07:44:13.689781  412953 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 07:44:13.689785  412953 command_runner.go:130] > # enable_tracing = false
	I1210 07:44:13.689792  412953 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1210 07:44:13.689799  412953 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 07:44:13.689806  412953 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 07:44:13.689833  412953 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
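A minimal sketch enabling tracing with the documented default endpoint and the always-sample rate from the comment above:

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000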
	I1210 07:44:13.689842  412953 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 07:44:13.689845  412953 command_runner.go:130] > [crio.nri]
	I1210 07:44:13.689850  412953 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 07:44:13.689861  412953 command_runner.go:130] > # enable_nri = true
	I1210 07:44:13.689865  412953 command_runner.go:130] > # NRI socket to listen on.
	I1210 07:44:13.689873  412953 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 07:44:13.689877  412953 command_runner.go:130] > # NRI plugin directory to use.
	I1210 07:44:13.689882  412953 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 07:44:13.689890  412953 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 07:44:13.689894  412953 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 07:44:13.689900  412953 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 07:44:13.689965  412953 command_runner.go:130] > # nri_disable_connections = false
	I1210 07:44:13.689975  412953 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 07:44:13.689991  412953 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 07:44:13.689997  412953 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 07:44:13.690006  412953 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 07:44:13.690011  412953 command_runner.go:130] > # NRI default validator configuration.
	I1210 07:44:13.690018  412953 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1210 07:44:13.690027  412953 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 07:44:13.690036  412953 command_runner.go:130] > # can be restricted/rejected:
	I1210 07:44:13.690044  412953 command_runner.go:130] > # - OCI hook injection
	I1210 07:44:13.690060  412953 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 07:44:13.690068  412953 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 07:44:13.690072  412953 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 07:44:13.690076  412953 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 07:44:13.690083  412953 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 07:44:13.690093  412953 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 07:44:13.690099  412953 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 07:44:13.690107  412953 command_runner.go:130] > #
	I1210 07:44:13.690111  412953 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 07:44:13.690115  412953 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 07:44:13.690122  412953 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 07:44:13.690134  412953 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 07:44:13.690148  412953 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 07:44:13.690154  412953 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 07:44:13.690159  412953 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 07:44:13.690165  412953 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 07:44:13.690168  412953 command_runner.go:130] > # ]
	I1210 07:44:13.690174  412953 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
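A hedged sketch turning on the default validator to reject OCI hook injection, using only option names listed above:

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true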
	I1210 07:44:13.690182  412953 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 07:44:13.690192  412953 command_runner.go:130] > [crio.stats]
	I1210 07:44:13.690198  412953 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 07:44:13.690212  412953 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 07:44:13.690219  412953 command_runner.go:130] > # stats_collection_period = 0
	I1210 07:44:13.690225  412953 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 07:44:13.690232  412953 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 07:44:13.690243  412953 command_runner.go:130] > # collection_period = 0
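A sketch of periodic collection instead of the on-demand default; the 10-second value is arbitrary:

    [crio.stats]
    stats_collection_period = 10
    collection_period = 10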
	I1210 07:44:13.692149  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648702659Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 07:44:13.692177  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648881459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 07:44:13.692188  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648978856Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 07:44:13.692196  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649067965Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 07:44:13.692212  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649235303Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.692221  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649618857Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 07:44:13.692237  412953 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 07:44:13.692317  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:13.692335  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:13.692359  412953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:44:13.692385  412953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:44:13.692523  412953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:44:13.692606  412953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:44:13.699318  412953 command_runner.go:130] > kubeadm
	I1210 07:44:13.699338  412953 command_runner.go:130] > kubectl
	I1210 07:44:13.699343  412953 command_runner.go:130] > kubelet
	I1210 07:44:13.700197  412953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:44:13.700295  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:44:13.707538  412953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:44:13.720130  412953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:44:13.732445  412953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1210 07:44:13.744899  412953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:44:13.748570  412953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 07:44:13.748818  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.875367  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:13.911048  412953 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:44:13.911077  412953 certs.go:195] generating shared ca certs ...
	I1210 07:44:13.911094  412953 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:13.911231  412953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:44:13.911285  412953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:44:13.911297  412953 certs.go:257] generating profile certs ...
	I1210 07:44:13.911404  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:44:13.911477  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:44:13.911525  412953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:44:13.911539  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 07:44:13.911552  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 07:44:13.911567  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 07:44:13.911578  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 07:44:13.911593  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 07:44:13.911610  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 07:44:13.911622  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 07:44:13.911637  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 07:44:13.911683  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:44:13.911717  412953 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:44:13.911729  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:44:13.911762  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:44:13.911791  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:44:13.911819  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:44:13.911865  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:13.911900  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /usr/share/ca-certificates/3785282.pem
	I1210 07:44:13.911918  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:13.911928  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem -> /usr/share/ca-certificates/378528.pem
	I1210 07:44:13.912577  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:44:13.931574  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:44:13.949287  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:44:13.966704  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:44:13.984537  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:44:14.005273  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:44:14.024726  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:44:14.043246  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:44:14.061500  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:44:14.078597  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:44:14.096003  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:44:14.113316  412953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:44:14.125784  412953 ssh_runner.go:195] Run: openssl version
	I1210 07:44:14.132223  412953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 07:44:14.132300  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.139621  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:44:14.146891  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150749  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150804  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150854  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.191223  412953 command_runner.go:130] > 3ec20f2e
	I1210 07:44:14.191672  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:44:14.199095  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.206573  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:44:14.214321  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218345  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218446  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218516  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.259240  412953 command_runner.go:130] > b5213941
	I1210 07:44:14.259776  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:44:14.267399  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.274814  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:44:14.282253  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286034  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286101  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286170  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.327536  412953 command_runner.go:130] > 51391683
	I1210 07:44:14.327674  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
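
The `openssl x509 -hash` / `ln -fs` / `test -L <hash>.0` sequence above follows the OpenSSL trust-store convention: a CA is trusted once a symlink named after its subject hash (e.g. `3ec20f2e.0`) exists in /etc/ssl/certs. A minimal Go sketch of that sequence, shelling out to openssl exactly as the log does (illustrative only, not minikube's implementation):

```go
// Hypothetical sketch: hash a CA cert with openssl, then link it into
// /etc/ssl/certs under the "<subject-hash>.0" name OpenSSL looks up.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// "openssl x509 -hash -noout -in <pem>" prints the subject hash, e.g. "3ec20f2e".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of the "sudo ln -fs ..." calls in the log: replace any stale link.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```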
	I1210 07:44:14.335034  412953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338581  412953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338609  412953 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 07:44:14.338616  412953 command_runner.go:130] > Device: 259,1	Inode: 1322411     Links: 1
	I1210 07:44:14.338623  412953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:14.338628  412953 command_runner.go:130] > Access: 2025-12-10 07:40:07.276287392 +0000
	I1210 07:44:14.338634  412953 command_runner.go:130] > Modify: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338639  412953 command_runner.go:130] > Change: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338644  412953 command_runner.go:130] >  Birth: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338702  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:44:14.379186  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.379683  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:44:14.420781  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.421255  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:44:14.461926  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.462055  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:44:14.509912  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.510522  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:44:14.558004  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.558477  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:44:14.599044  412953 command_runner.go:130] > Certificate will not expire
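
Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds); "Certificate will not expire" is openssl's pass message. A self-contained Go equivalent using crypto/x509 (an assumption about the shape of the check, not minikube's code):

```go
// Report whether a PEM-encoded certificate expires within the given window,
// mirroring "openssl x509 -noout -in <crt> -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" passes NotAfter, i.e. the cert expires inside the window.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```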
	I1210 07:44:14.599455  412953 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:14.599550  412953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:44:14.599615  412953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:44:14.630244  412953 cri.go:89] found id: ""
	I1210 07:44:14.630352  412953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:44:14.638132  412953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 07:44:14.638152  412953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 07:44:14.638158  412953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 07:44:14.638171  412953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:44:14.638176  412953 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:44:14.638225  412953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:44:14.645608  412953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:44:14.646002  412953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-314220" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646112  412953 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-376671/kubeconfig needs updating (will repair): [kubeconfig missing "functional-314220" cluster setting kubeconfig missing "functional-314220" context setting]
	I1210 07:44:14.646387  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
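
The kubeconfig.go lines above detect that the "functional-314220" cluster and context entries are missing and repair the file under a write lock. A hypothetical sketch of such a repair with client-go's clientcmd package (names and paths are illustrative, not minikube's actual implementation):

```go
// Add the missing cluster and context entries for a profile to a kubeconfig.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func repairKubeconfig(path, profile, server, caPath string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	// Add the cluster entry the verifier found missing.
	cfg.Clusters[profile] = &api.Cluster{
		Server:               server,
		CertificateAuthority: caPath,
	}
	// Add the matching context so the profile is addressable.
	cfg.Contexts[profile] = &api.Context{
		Cluster:  profile,
		AuthInfo: profile,
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repairKubeconfig(
		"/home/jenkins/minikube-integration/22089-376671/kubeconfig",
		"functional-314220",
		"https://192.168.49.2:8441",
		"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt",
	)
}
```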
	I1210 07:44:14.646808  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646962  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.647769  412953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:44:14.647791  412953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 07:44:14.647797  412953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:44:14.647801  412953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:44:14.647806  412953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:44:14.647858  412953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 07:44:14.648134  412953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:44:14.656007  412953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 07:44:14.656041  412953 kubeadm.go:602] duration metric: took 17.859608ms to restartPrimaryControlPlane
	I1210 07:44:14.656051  412953 kubeadm.go:403] duration metric: took 56.601079ms to StartCluster
	I1210 07:44:14.656066  412953 settings.go:142] acquiring lock: {Name:mk83336eaf1e9f7632e16e15e8d9e14eb0e0d0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.656132  412953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.656799  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.657004  412953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:44:14.657416  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:14.657431  412953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:44:14.658092  412953 addons.go:70] Setting storage-provisioner=true in profile "functional-314220"
	I1210 07:44:14.658110  412953 addons.go:239] Setting addon storage-provisioner=true in "functional-314220"
	I1210 07:44:14.658137  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.658702  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.665050  412953 addons.go:70] Setting default-storageclass=true in profile "functional-314220"
	I1210 07:44:14.665125  412953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-314220"
	I1210 07:44:14.665550  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.671074  412953 out.go:179] * Verifying Kubernetes components...
	I1210 07:44:14.676445  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:14.698425  412953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:44:14.701187  412953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.701211  412953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:44:14.701278  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.705662  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.705841  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.706176  412953 addons.go:239] Setting addon default-storageclass=true in "functional-314220"
	I1210 07:44:14.706207  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.706646  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.744732  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:14.744810  412953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:14.744830  412953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:44:14.744900  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.778977  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:14.876345  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:14.912899  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.922881  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.662190  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662227  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662277  412953 retry.go:31] will retry after 311.954263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662347  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662381  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662389  412953 retry.go:31] will retry after 234.07921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
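
The retry.go lines above show the pattern the addon applier uses while the apiserver is still down: each failed `kubectl apply` is logged and re-run after a growing, jittered delay. A small Go sketch of that pattern (an illustration, not minikube's actual retry code):

```go
// Re-run an operation until it succeeds, sleeping a jittered, growing
// interval between attempts, in the style of the "will retry after ..." lines.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryApply(apply func() error, attempts int) error {
	backoff := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		err := apply()
		if err == nil {
			return nil
		}
		// Jitter the delay so concurrent appliers do not retry in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		backoff *= 2
	}
	return fmt.Errorf("apply failed after %d attempts", attempts)
}

func main() {
	i := 0
	_ = retryApply(func() error {
		i++
		if i < 3 {
			return fmt.Errorf("connection refused")
		}
		return nil
	}, 5)
}
```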
	I1210 07:44:15.662447  412953 node_ready.go:35] waiting up to 6m0s for node "functional-314220" to be "Ready" ...
	I1210 07:44:15.662665  412953 type.go:168] "Request Body" body=""
	I1210 07:44:15.662765  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:15.663157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:15.897488  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.957295  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.957408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.957431  412953 retry.go:31] will retry after 307.155853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.974530  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.030916  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.034621  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.034655  412953 retry.go:31] will retry after 246.948718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.162840  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.162973  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.163310  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.265735  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:16.282284  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.335651  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.339071  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.339103  412953 retry.go:31] will retry after 647.058742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361763  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.361804  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361822  412953 retry.go:31] will retry after 514.560746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.663231  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.663327  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.663641  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.877219  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.942769  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.942876  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.942918  412953 retry.go:31] will retry after 1.098847883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.987296  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.051987  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.055923  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.055964  412953 retry.go:31] will retry after 522.145884ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.163324  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.163405  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.163711  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:17.578391  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.635896  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.639746  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.639777  412953 retry.go:31] will retry after 768.766099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.662946  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.663049  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.663399  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:17.663474  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
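
The node_ready.go poll above issues a GET against /api/v1/nodes/functional-314220 roughly every 500ms, tolerating "connection refused" until the 6m0s deadline. A minimal client-go sketch of such a check (an assumption about its shape, not minikube's implementation):

```go
// Poll a node until its Ready condition is True or the deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// Errors (e.g. connection refused while the apiserver restarts) are
		// simply retried until the deadline.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	fmt.Println(waitNodeReady("/home/jenkins/.kube/config", "functional-314220", 6*time.Minute))
}
```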
	I1210 07:44:18.042986  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:18.101043  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.104777  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.104811  412953 retry.go:31] will retry after 877.527078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.163066  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.163146  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.163494  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.409040  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:18.473157  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.473195  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.473221  412953 retry.go:31] will retry after 1.043117699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.663503  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.663629  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.663908  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.983598  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:19.054379  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.057795  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.057861  412953 retry.go:31] will retry after 2.806616267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.163140  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.163219  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.163514  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:19.517094  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:19.577109  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.577146  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.577191  412953 retry.go:31] will retry after 2.260515502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.663401  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.663487  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.663841  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:19.663910  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:20.163656  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.163728  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.164096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:20.662791  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.662881  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.663196  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.162812  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.162886  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.163185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.662729  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.662808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.663095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.838627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:21.865153  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:21.916464  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.916504  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.916523  412953 retry.go:31] will retry after 2.650338189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.931641  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.931686  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.931712  412953 retry.go:31] will retry after 2.932548046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:22.163174  412953 type.go:168] "Request Body" body=""
	I1210 07:44:22.163252  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:22.163593  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:22.163668  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:22.663491  412953 type.go:168] "Request Body" body=""
	I1210 07:44:22.663596  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:22.663955  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:23.162683  412953 type.go:168] "Request Body" body=""
	I1210 07:44:23.162754  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:23.163096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:23.662729  412953 type.go:168] "Request Body" body=""
	I1210 07:44:23.662804  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:23.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:24.162801  412953 type.go:168] "Request Body" body=""
	I1210 07:44:24.162914  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:24.163280  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:24.567824  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:24.621746  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:24.625216  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.625246  412953 retry.go:31] will retry after 7.727905191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.663687  412953 type.go:168] "Request Body" body=""
	I1210 07:44:24.663760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:24.664012  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:24.664064  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:24.864476  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:24.921495  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:24.921557  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.921581  412953 retry.go:31] will retry after 3.915945796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:25.162916  412953 type.go:168] "Request Body" body=""
	I1210 07:44:25.162995  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:25.163327  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:25.663045  412953 type.go:168] "Request Body" body=""
	I1210 07:44:25.663124  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:25.663415  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:26.163115  412953 type.go:168] "Request Body" body=""
	I1210 07:44:26.163196  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:26.163520  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:26.663439  412953 type.go:168] "Request Body" body=""
	I1210 07:44:26.663518  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:26.663824  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:27.163578  412953 type.go:168] "Request Body" body=""
	I1210 07:44:27.163681  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:27.164000  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:27.164069  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:27.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:44:27.662823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:27.663205  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:44:28.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:28.163219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:44:28.662869  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:28.663208  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.838651  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:28.899244  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:28.899280  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:28.899298  412953 retry.go:31] will retry after 8.041674514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:29.162702  412953 type.go:168] "Request Body" body=""
	I1210 07:44:29.162772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:29.163052  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:29.662768  412953 type.go:168] "Request Body" body=""
	I1210 07:44:29.662841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:29.663169  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:29.663226  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:30.162886  412953 type.go:168] "Request Body" body=""
	I1210 07:44:30.162968  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:30.163308  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:30.662996  412953 type.go:168] "Request Body" body=""
	I1210 07:44:30.663089  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:30.663373  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:31.163117  412953 type.go:168] "Request Body" body=""
	I1210 07:44:31.163198  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:31.163590  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:31.662807  412953 type.go:168] "Request Body" body=""
	I1210 07:44:31.662884  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:31.663200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:31.663248  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:32.162759  412953 type.go:168] "Request Body" body=""
	I1210 07:44:32.162841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:32.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:32.353668  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:32.409993  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:32.413403  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:32.413432  412953 retry.go:31] will retry after 6.914628842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:32.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:44:32.662856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:32.663233  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:33.162956  412953 type.go:168] "Request Body" body=""
	I1210 07:44:33.163049  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:33.163356  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:33.662689  412953 type.go:168] "Request Body" body=""
	I1210 07:44:33.662755  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:33.663031  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:34.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:44:34.162848  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:34.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:34.163258  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:34.663111  412953 type.go:168] "Request Body" body=""
	I1210 07:44:34.663191  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:34.663487  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:35.163272  412953 type.go:168] "Request Body" body=""
	I1210 07:44:35.163341  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:35.163701  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:35.663625  412953 type.go:168] "Request Body" body=""
	I1210 07:44:35.663709  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:35.664060  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:36.162762  412953 type.go:168] "Request Body" body=""
	I1210 07:44:36.162838  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:36.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:36.663557  412953 type.go:168] "Request Body" body=""
	I1210 07:44:36.663625  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:36.663891  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:36.663931  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:36.941565  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:36.998306  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:37.009698  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:37.009736  412953 retry.go:31] will retry after 8.728706472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
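The retry.go lines record an apply-with-backoff loop: the same kubectl apply is re-run after a growing, jittered delay (3.9s, 7.7s, 8.0s, 8.7s so far in this section). A sketch of the equivalent loop in shell, with illustrative fixed delays rather than minikube's jittered backoff:

    # Re-run the apply until it succeeds, sleeping longer each round
    for delay in 4 8 16 28; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$delay"
    done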
	I1210 07:44:37.163096  412953 type.go:168] "Request Body" body=""
	I1210 07:44:37.163180  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:37.163526  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:37.663088  412953 type.go:168] "Request Body" body=""
	I1210 07:44:37.663168  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:37.663465  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:38.162753  412953 type.go:168] "Request Body" body=""
	I1210 07:44:38.162830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:38.163170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:38.662738  412953 type.go:168] "Request Body" body=""
	I1210 07:44:38.662816  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:38.663200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:39.162911  412953 type.go:168] "Request Body" body=""
	I1210 07:44:39.162982  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:39.163311  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:39.163365  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:39.328689  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:39.391413  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:39.391461  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:39.391479  412953 retry.go:31] will retry after 20.069023813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:39.663623  412953 type.go:168] "Request Body" body=""
	I1210 07:44:39.663692  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:39.664007  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:40.163717  412953 type.go:168] "Request Body" body=""
	I1210 07:44:40.163789  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:40.164098  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:40.662854  412953 type.go:168] "Request Body" body=""
	I1210 07:44:40.662926  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:40.663261  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:41.163240  412953 type.go:168] "Request Body" body=""
	I1210 07:44:41.163310  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:41.163588  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:41.163638  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:41.663374  412953 type.go:168] "Request Body" body=""
	I1210 07:44:41.663448  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:41.663787  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:42.163614  412953 type.go:168] "Request Body" body=""
	I1210 07:44:42.163700  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:42.164110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:42.662817  412953 type.go:168] "Request Body" body=""
	I1210 07:44:42.662893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:42.663220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:43.162788  412953 type.go:168] "Request Body" body=""
	I1210 07:44:43.162930  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:43.163267  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:43.662791  412953 type.go:168] "Request Body" body=""
	I1210 07:44:43.662874  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:43.663242  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:43.663300  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:44.162963  412953 type.go:168] "Request Body" body=""
	I1210 07:44:44.163057  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:44.163345  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:44.663123  412953 type.go:168] "Request Body" body=""
	I1210 07:44:44.663199  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:44.663539  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:45.163160  412953 type.go:168] "Request Body" body=""
	I1210 07:44:45.163248  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:45.163618  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:45.663558  412953 type.go:168] "Request Body" body=""
	I1210 07:44:45.663640  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:45.663930  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:45.663983  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:45.739308  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:45.803966  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:45.804014  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:45.804032  412953 retry.go:31] will retry after 15.619557427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:46.163368  412953 type.go:168] "Request Body" body=""
	I1210 07:44:46.163449  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:46.163809  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:46.663723  412953 type.go:168] "Request Body" body=""
	I1210 07:44:46.663804  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:46.664157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:47.162830  412953 type.go:168] "Request Body" body=""
	I1210 07:44:47.162904  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:47.163246  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:47.662803  412953 type.go:168] "Request Body" body=""
	I1210 07:44:47.662878  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:47.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:48.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:44:48.162868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:48.163231  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:48.163295  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:48.662914  412953 type.go:168] "Request Body" body=""
	I1210 07:44:48.662989  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:48.663322  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:49.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:44:49.162856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:49.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:49.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:44:49.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:49.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:50.162736  412953 type.go:168] "Request Body" body=""
	I1210 07:44:50.162810  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:50.163100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:50.662994  412953 type.go:168] "Request Body" body=""
	I1210 07:44:50.663140  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:50.663480  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:50.663536  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:51.163315  412953 type.go:168] "Request Body" body=""
	I1210 07:44:51.163397  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:51.163738  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:51.663484  412953 type.go:168] "Request Body" body=""
	I1210 07:44:51.663554  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:51.663817  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:52.163592  412953 type.go:168] "Request Body" body=""
	I1210 07:44:52.163675  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:52.164024  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:52.663725  412953 type.go:168] "Request Body" body=""
	I1210 07:44:52.663805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:52.664170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:52.664269  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:53.162915  412953 type.go:168] "Request Body" body=""
	I1210 07:44:53.162989  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:53.163353  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:53.662767  412953 type.go:168] "Request Body" body=""
	I1210 07:44:53.662844  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:53.663173  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:54.162859  412953 type.go:168] "Request Body" body=""
	I1210 07:44:54.162935  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:54.163287  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:54.663094  412953 type.go:168] "Request Body" body=""
	I1210 07:44:54.663170  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:54.663454  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:55.163141  412953 type.go:168] "Request Body" body=""
	I1210 07:44:55.163215  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:55.163544  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:55.163602  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:55.663472  412953 type.go:168] "Request Body" body=""
	I1210 07:44:55.663544  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:55.663857  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:56.163640  412953 type.go:168] "Request Body" body=""
	I1210 07:44:56.163716  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:56.163996  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:56.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:44:56.662805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:56.663197  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:57.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:44:57.162851  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:57.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:57.662711  412953 type.go:168] "Request Body" body=""
	I1210 07:44:57.662787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:57.663158  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:57.663213  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:58.162868  412953 type.go:168] "Request Body" body=""
	I1210 07:44:58.162941  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:58.163282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:58.662753  412953 type.go:168] "Request Body" body=""
	I1210 07:44:58.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:58.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:59.162735  412953 type.go:168] "Request Body" body=""
	I1210 07:44:59.162805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:59.163143  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:59.460756  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:59.515959  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:59.519313  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:59.519349  412953 retry.go:31] will retry after 28.214559207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
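Taken together, the two failure signatures point at the apiserver process itself rather than the network path: the node polls are refused at 192.168.49.2:8441 and the OpenAPI fetches are refused at localhost:8441, so both the external and loopback routes to port 8441 are dead. On this crio-based node, one way to check whether kube-apiserver is running at all (a diagnostic suggestion, not a command from the log):

    # List the kube-apiserver container, running or exited, via crictl
    minikube -p functional-314220 ssh -- sudo crictl ps -a --name kube-apiserver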
	I1210 07:44:59.663650  412953 type.go:168] "Request Body" body=""
	I1210 07:44:59.663726  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:59.664046  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:59.664099  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:00.162860  412953 type.go:168] "Request Body" body=""
	I1210 07:45:00.162952  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:00.163293  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:00.671201  412953 type.go:168] "Request Body" body=""
	I1210 07:45:00.671283  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:00.671619  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:01.163405  412953 type.go:168] "Request Body" body=""
	I1210 07:45:01.163498  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:01.163856  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:01.424291  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:01.504370  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:01.504408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:01.504426  412953 retry.go:31] will retry after 11.28420248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:01.662884  412953 type.go:168] "Request Body" body=""
	I1210 07:45:01.662972  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:01.663364  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:02.162730  412953 type.go:168] "Request Body" body=""
	I1210 07:45:02.162802  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:02.163079  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:02.163130  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:02.662859  412953 type.go:168] "Request Body" body=""
	I1210 07:45:02.662943  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:02.663293  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:03.163040  412953 type.go:168] "Request Body" body=""
	I1210 07:45:03.163132  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:03.163447  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:03.663127  412953 type.go:168] "Request Body" body=""
	I1210 07:45:03.663202  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:03.663476  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:04.162835  412953 type.go:168] "Request Body" body=""
	I1210 07:45:04.162911  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:04.163270  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:04.163341  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:04.663266  412953 type.go:168] "Request Body" body=""
	I1210 07:45:04.663342  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:04.663667  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET of /api/v1/nodes/functional-314220 repeats every ~500ms from 07:45:05 through 07:45:12.66, each returning an empty response; node_ready.go:55 re-logs the "connection refused" warning at 07:45:06, 07:45:08, and 07:45:10 ...]
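Everything in the condensed window above is a single node-readiness loop: the client issues a protobuf-preferred GET for the node object twice a second and treats any transport error as "not ready yet, retry". A minimal client-go sketch of that pattern, assuming a kubeconfig at the path the log shows; the function name and structure are illustrative, not minikube's actual node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object every 500ms (the cadence visible in
// the log) until its Ready condition is True or the context expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// Transport errors (e.g. "connect: connection refused") are
		// swallowed and retried, which is what node_ready.go:55 logs above.
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "functional-314220"))
}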
	I1210 07:45:12.789627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:12.850283  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:12.850328  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:12.850347  412953 retry.go:31] will retry after 28.725170788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
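The "will retry after 28.725170788s" line shows the failed apply being re-queued with a long jittered backoff rather than failing the addon outright. A hedged Go sketch of that retry shape; applyWithRetry, the attempt count, and the backoff constants are hypothetical stand-ins, not minikube's retry.go:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f manifest` until it
// succeeds or maxAttempts is exhausted, sleeping a jittered, doubling
// backoff between attempts (cf. the randomized delay in the log above).
func applyWithRetry(manifest string, maxAttempts int) error {
	backoff := 10 * time.Second
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("apply failed (attempt %d): %v\n%s\nretrying in %s\n", attempt, err, out, sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return fmt.Errorf("apply of %s did not succeed after %d attempts", manifest, maxAttempts)
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
		fmt.Println(err)
	}
}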
	[... the ~500ms poll and its periodic "connection refused" warning continue unchanged from 07:45:13 through 07:45:27.66 while the storage-provisioner retry timer counts down ...]
	I1210 07:45:27.734263  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:45:27.790479  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:27.794248  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:27.794290  412953 retry.go:31] will retry after 44.751938518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
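Both validation failures share one root cause: kubectl cannot download the OpenAPI schema because nothing is listening on localhost:8441, so the manifests never actually get validated. A pre-flight probe of the apiserver's standard /readyz endpoint would distinguish "apiserver unreachable" from "manifest invalid". The base URL below is taken from the log; the probe itself is an assumption for illustration, not something minikube runs:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverReady probes /readyz. A connection-refused error here means
// retrying `kubectl apply` is pointless until the control plane is back,
// which is exactly the state the log above captures.
func apiserverReady(base string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed certificate in this setup;
		// skip verification for the probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/readyz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(apiserverReady("https://localhost:8441"))
}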
	[... polling continues every ~500ms from 07:45:28 through 07:45:41.16, with node_ready.go:55 warnings recurring roughly every two seconds; the apiserver remains unreachable throughout ...]
	I1210 07:45:41.576476  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:41.640104  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640160  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640252  412953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
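The "running callbacks: [...]" wrapper in the give-up message above indicates that addon enablement fans out to per-addon apply steps and aggregates their failures into one error. A loose sketch of that shape; callback, runCallbacks, and the error text are hypothetical stand-ins for minikube's internals:

package main

import (
	"errors"
	"fmt"
)

type callback func() error

// runCallbacks runs each enable step and collects every failure, mirroring
// the "running callbacks: [...]" error format seen in the log.
func runCallbacks(steps []callback) error {
	var errs []error
	for _, step := range steps {
		if err := step(); err != nil {
			errs = append(errs, err)
		}
	}
	if len(errs) > 0 {
		return fmt.Errorf("running callbacks: %v", errs)
	}
	return nil
}

func main() {
	err := runCallbacks([]callback{
		func() error { return errors.New("kubectl apply: connection refused") },
	})
	fmt.Println(err)
}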
	[... the readiness loop keeps polling every ~500ms from 07:45:41.66 through 07:46:01.16, warnings included, without ever reaching the apiserver ...]
	I1210 07:46:01.662851  412953 type.go:168] "Request Body" body=""
	I1210 07:46:01.662929  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:01.663282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:02.162953  412953 type.go:168] "Request Body" body=""
	I1210 07:46:02.163054  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:02.163311  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:02.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:46:02.662862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:02.663241  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:03.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:46:03.162854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:03.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:03.662701  412953 type.go:168] "Request Body" body=""
	I1210 07:46:03.662776  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:03.663100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:03.663150  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:04.162777  412953 type.go:168] "Request Body" body=""
	I1210 07:46:04.162853  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:04.163234  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:04.663099  412953 type.go:168] "Request Body" body=""
	I1210 07:46:04.663184  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:04.663539  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:05.163328  412953 type.go:168] "Request Body" body=""
	I1210 07:46:05.163396  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:05.163668  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:05.663706  412953 type.go:168] "Request Body" body=""
	I1210 07:46:05.663785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:05.664109  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:05.664167  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:06.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:46:06.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:06.163213  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:06.662683  412953 type.go:168] "Request Body" body=""
	I1210 07:46:06.662757  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:06.663110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:07.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:46:07.162915  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:07.163278  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:07.663001  412953 type.go:168] "Request Body" body=""
	I1210 07:46:07.663101  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:07.663452  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:08.163105  412953 type.go:168] "Request Body" body=""
	I1210 07:46:08.163173  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:08.163505  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:08.163551  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:08.663246  412953 type.go:168] "Request Body" body=""
	I1210 07:46:08.663355  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:08.663696  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:09.163360  412953 type.go:168] "Request Body" body=""
	I1210 07:46:09.163436  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:09.163764  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:09.663545  412953 type.go:168] "Request Body" body=""
	I1210 07:46:09.663613  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:09.663865  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:10.163582  412953 type.go:168] "Request Body" body=""
	I1210 07:46:10.163660  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:10.164166  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:10.164222  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:10.662978  412953 type.go:168] "Request Body" body=""
	I1210 07:46:10.663066  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:10.663429  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:11.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:46:11.162791  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:11.163072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:11.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:46:11.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:11.663211  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:12.162917  412953 type.go:168] "Request Body" body=""
	I1210 07:46:12.162995  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:12.163357  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
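	The condensed block above is minikube's node-readiness wait loop at work: the kubeconfig still points at the apiserver on 192.168.49.2:8441, but nothing is listening, so every GET fails at the TCP dial and the loop retries on a fixed interval until its deadline. A minimal Go sketch of that poll-until-reachable pattern, assuming a plain net/http client and an illustrative 500ms interval and 6-minute deadline (not minikube's actual implementation):

		package main

		import (
			"context"
			"crypto/tls"
			"fmt"
			"net/http"
			"time"
		)

		// waitNodeReachable polls url until the apiserver answers at all, or
		// until ctx expires. Dial errors (e.g. "connection refused" while the
		// apiserver is down) are treated as retryable, mirroring the
		// "will retry" warnings in the log above.
		func waitNodeReachable(ctx context.Context, client *http.Client, url string) error {
			ticker := time.NewTicker(500 * time.Millisecond) // illustrative interval
			defer ticker.Stop()
			for {
				req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
				if err != nil {
					return err
				}
				resp, err := client.Do(req)
				if err == nil {
					resp.Body.Close()
					return nil // apiserver answered; a real loop would now parse node conditions
				}
				fmt.Printf("error getting node (will retry): %v\n", err)
				select {
				case <-ctx.Done():
					return ctx.Err()
				case <-ticker.C:
				}
			}
		}

		func main() {
			// Endpoint copied from the log; skipping TLS verification here is
			// purely for the sketch (the real client trusts the cluster CA).
			url := "https://192.168.49.2:8441/api/v1/nodes/functional-314220"
			client := &http.Client{Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			}}
			ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
			defer cancel()
			if err := waitNodeReachable(ctx, client, url); err != nil {
				fmt.Println("node never became reachable:", err)
			}
		}

	Against a down apiserver this prints the same steady stream of dial errors seen in the log, then gives up at the deadline, which is exactly the failure shape of this test.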
	I1210 07:46:12.546948  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:46:12.609717  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609753  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609836  412953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:46:12.614848  412953 out.go:179] * Enabled addons: 
	I1210 07:46:12.617540  412953 addons.go:530] duration metric: took 1m57.960111858s for enable addons: enabled=[]
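	The storageclass failure above comes from minikube shelling out to kubectl inside the guest: the apply fails not on the manifest itself but because kubectl cannot download the OpenAPI schema for validation while the apiserver is unreachable (hence the --validate=false hint). A rough Go sketch of that apply-with-retry shape, using os/exec and an invented attempt count and backoff, and ignoring the sudo/SSH indirection the real command runner uses (the actual addons code in addons.go differs):

		package main

		import (
			"fmt"
			"os/exec"
			"time"
		)

		// applyManifest runs `kubectl apply --force -f path` and retries on a
		// non-zero exit, the way the "apply failed, will retry" log line
		// suggests. attempts and the sleep are illustrative only.
		func applyManifest(kubectl, kubeconfig, path string, attempts int) error {
			var lastErr error
			for i := 0; i < attempts; i++ {
				cmd := exec.Command(kubectl, "apply", "--force", "-f", path)
				cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
				out, err := cmd.CombinedOutput()
				if err == nil {
					return nil
				}
				lastErr = fmt.Errorf("apply %s: %w\n%s", path, err, out)
				time.Sleep(2 * time.Second)
			}
			return lastErr
		}

		func main() {
			err := applyManifest(
				"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
				"/var/lib/minikube/kubeconfig",
				"/etc/kubernetes/addons/storageclass.yaml",
				3, // illustrative retry budget
			)
			if err != nil {
				fmt.Println("enabling addon failed:", err)
			}
		}

	With the apiserver refusing connections, every attempt exits non-zero and the addon enable surfaces the error, matching the empty enabled=[] result logged above.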
	I1210 07:46:12.662919  412953 type.go:168] "Request Body" body=""
	I1210 07:46:12.663005  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:12.663288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:12.663340  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:13.162812  412953 type.go:168] "Request Body" body=""
	I1210 07:46:13.162891  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:13.163241  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:13.662827  412953 type.go:168] "Request Body" body=""
	I1210 07:46:13.662909  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:13.663262  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:14.162721  412953 type.go:168] "Request Body" body=""
	I1210 07:46:14.162802  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:14.163126  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:14.663062  412953 type.go:168] "Request Body" body=""
	I1210 07:46:14.663130  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:14.663461  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:14.663518  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:15.163082  412953 type.go:168] "Request Body" body=""
	I1210 07:46:15.163169  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:15.163586  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:15.662688  412953 type.go:168] "Request Body" body=""
	I1210 07:46:15.662765  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:15.663046  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:16.162762  412953 type.go:168] "Request Body" body=""
	I1210 07:46:16.162843  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:16.163210  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:16.662984  412953 type.go:168] "Request Body" body=""
	I1210 07:46:16.663080  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:16.663419  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:17.162974  412953 type.go:168] "Request Body" body=""
	I1210 07:46:17.163061  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:17.163322  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:17.163365  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:17.663052  412953 type.go:168] "Request Body" body=""
	I1210 07:46:17.663135  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:17.663464  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:18.163305  412953 type.go:168] "Request Body" body=""
	I1210 07:46:18.163382  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:18.163729  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:18.663345  412953 type.go:168] "Request Body" body=""
	I1210 07:46:18.663418  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:18.663680  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:19.163480  412953 type.go:168] "Request Body" body=""
	I1210 07:46:19.163553  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:19.163943  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:19.163995  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:19.663628  412953 type.go:168] "Request Body" body=""
	I1210 07:46:19.663701  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:19.664020  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:20.162792  412953 type.go:168] "Request Body" body=""
	I1210 07:46:20.162868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:20.163216  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:20.662994  412953 type.go:168] "Request Body" body=""
	I1210 07:46:20.663084  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:20.663424  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:21.162961  412953 type.go:168] "Request Body" body=""
	I1210 07:46:21.163061  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:21.163356  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:21.662751  412953 type.go:168] "Request Body" body=""
	I1210 07:46:21.662832  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:21.663141  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:21.663198  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:22.162889  412953 type.go:168] "Request Body" body=""
	I1210 07:46:22.162971  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:22.163363  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:22.663102  412953 type.go:168] "Request Body" body=""
	I1210 07:46:22.663178  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:22.663485  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:23.163205  412953 type.go:168] "Request Body" body=""
	I1210 07:46:23.163277  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:23.163529  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:23.663272  412953 type.go:168] "Request Body" body=""
	I1210 07:46:23.663341  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:23.663672  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:23.663725  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:24.163522  412953 type.go:168] "Request Body" body=""
	I1210 07:46:24.163596  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:24.163927  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:24.662937  412953 type.go:168] "Request Body" body=""
	I1210 07:46:24.663007  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:24.663348  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:25.162766  412953 type.go:168] "Request Body" body=""
	I1210 07:46:25.162842  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:25.163192  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:25.662762  412953 type.go:168] "Request Body" body=""
	I1210 07:46:25.662836  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:25.663200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:26.162716  412953 type.go:168] "Request Body" body=""
	I1210 07:46:26.162788  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:26.163132  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:26.163196  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:26.662779  412953 type.go:168] "Request Body" body=""
	I1210 07:46:26.662852  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:26.663214  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:27.162966  412953 type.go:168] "Request Body" body=""
	I1210 07:46:27.163059  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:27.163400  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:27.663107  412953 type.go:168] "Request Body" body=""
	I1210 07:46:27.663178  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:27.663429  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:28.162791  412953 type.go:168] "Request Body" body=""
	I1210 07:46:28.162875  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:28.163237  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:28.163291  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:28.662824  412953 type.go:168] "Request Body" body=""
	I1210 07:46:28.662900  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:28.663270  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:29.162964  412953 type.go:168] "Request Body" body=""
	I1210 07:46:29.163094  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:29.163427  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:29.662778  412953 type.go:168] "Request Body" body=""
	I1210 07:46:29.662855  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:29.663206  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:30.162949  412953 type.go:168] "Request Body" body=""
	I1210 07:46:30.163043  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:30.163442  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:30.163519  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:30.663178  412953 type.go:168] "Request Body" body=""
	I1210 07:46:30.663251  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:30.663507  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:31.162779  412953 type.go:168] "Request Body" body=""
	I1210 07:46:31.162861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:31.163241  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:31.662779  412953 type.go:168] "Request Body" body=""
	I1210 07:46:31.662863  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:31.663242  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:32.163585  412953 type.go:168] "Request Body" body=""
	I1210 07:46:32.163652  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:32.163904  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:32.163946  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:32.663710  412953 type.go:168] "Request Body" body=""
	I1210 07:46:32.663785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:32.664115  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:33.162721  412953 type.go:168] "Request Body" body=""
	I1210 07:46:33.162805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:33.163126  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:33.662701  412953 type.go:168] "Request Body" body=""
	I1210 07:46:33.662773  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:33.663075  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:34.162808  412953 type.go:168] "Request Body" body=""
	I1210 07:46:34.162883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:34.163215  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:34.663030  412953 type.go:168] "Request Body" body=""
	I1210 07:46:34.663135  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:34.663479  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:34.663544  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:35.163296  412953 type.go:168] "Request Body" body=""
	I1210 07:46:35.163366  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:35.163632  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:35.663541  412953 type.go:168] "Request Body" body=""
	I1210 07:46:35.663615  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:35.663930  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:36.162688  412953 type.go:168] "Request Body" body=""
	I1210 07:46:36.162762  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:36.163076  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:36.662753  412953 type.go:168] "Request Body" body=""
	I1210 07:46:36.662826  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:36.663102  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:37.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:46:37.162863  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:37.163224  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:37.163284  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:37.662950  412953 type.go:168] "Request Body" body=""
	I1210 07:46:37.663050  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:37.663385  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:38.163090  412953 type.go:168] "Request Body" body=""
	I1210 07:46:38.163159  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:38.163458  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:38.662777  412953 type.go:168] "Request Body" body=""
	I1210 07:46:38.662848  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:38.663155  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:39.162756  412953 type.go:168] "Request Body" body=""
	I1210 07:46:39.162851  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:39.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:39.662727  412953 type.go:168] "Request Body" body=""
	I1210 07:46:39.662805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:39.663239  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:39.663299  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:40.162968  412953 type.go:168] "Request Body" body=""
	I1210 07:46:40.163066  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:40.163428  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:40.663175  412953 type.go:168] "Request Body" body=""
	I1210 07:46:40.663247  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:40.663570  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:41.163330  412953 type.go:168] "Request Body" body=""
	I1210 07:46:41.163398  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:41.163670  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:41.663442  412953 type.go:168] "Request Body" body=""
	I1210 07:46:41.663514  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:41.663828  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:41.663885  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:42.163612  412953 type.go:168] "Request Body" body=""
	I1210 07:46:42.163698  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:42.164038  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:42.662671  412953 type.go:168] "Request Body" body=""
	I1210 07:46:42.662751  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:42.663040  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:43.162772  412953 type.go:168] "Request Body" body=""
	I1210 07:46:43.162852  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:43.163223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:43.662929  412953 type.go:168] "Request Body" body=""
	I1210 07:46:43.663035  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:43.663349  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:44.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:46:44.162784  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:44.163074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:44.163124  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:44.663067  412953 type.go:168] "Request Body" body=""
	I1210 07:46:44.663140  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:44.663477  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:45.163353  412953 type.go:168] "Request Body" body=""
	I1210 07:46:45.163451  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:45.163846  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:45.662750  412953 type.go:168] "Request Body" body=""
	I1210 07:46:45.662825  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:45.663122  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:46.162781  412953 type.go:168] "Request Body" body=""
	I1210 07:46:46.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:46.163209  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:46.163281  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:46.662789  412953 type.go:168] "Request Body" body=""
	I1210 07:46:46.662869  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:46.663244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:47.162966  412953 type.go:168] "Request Body" body=""
	I1210 07:46:47.163051  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:47.163320  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:47.662754  412953 type.go:168] "Request Body" body=""
	I1210 07:46:47.662828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:47.663189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:48.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:46:48.162860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:48.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:48.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:46:48.662793  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:48.663100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:48.663152  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-314220 round trip repeats every ~500ms from 07:46:49 through 07:47:50, each attempt logging an empty "Response" status="" headers="" milliseconds=0; node_ready.go:55 emits the identical will-retry "connection refused" warning after roughly every fourth attempt, first at W1210 07:46:50.663354 and last at W1210 07:47:50.164024 ...]
	I1210 07:47:50.662959  412953 type.go:168] "Request Body" body=""
	I1210 07:47:50.663053  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:50.663390  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:51.163109  412953 type.go:168] "Request Body" body=""
	I1210 07:47:51.163226  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:51.163573  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:51.663350  412953 type.go:168] "Request Body" body=""
	I1210 07:47:51.663424  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:51.663681  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:52.163449  412953 type.go:168] "Request Body" body=""
	I1210 07:47:52.163526  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:52.163814  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:52.663598  412953 type.go:168] "Request Body" body=""
	I1210 07:47:52.663677  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:52.664036  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:52.664093  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:53.162754  412953 type.go:168] "Request Body" body=""
	I1210 07:47:53.162833  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:53.163184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:53.662766  412953 type.go:168] "Request Body" body=""
	I1210 07:47:53.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:53.663209  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:54.162947  412953 type.go:168] "Request Body" body=""
	I1210 07:47:54.163051  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:54.163402  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:54.663120  412953 type.go:168] "Request Body" body=""
	I1210 07:47:54.663194  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:54.663486  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:55.163337  412953 type.go:168] "Request Body" body=""
	I1210 07:47:55.163413  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:55.163797  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:55.163853  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:55.663646  412953 type.go:168] "Request Body" body=""
	I1210 07:47:55.663720  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:55.664034  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:56.162701  412953 type.go:168] "Request Body" body=""
	I1210 07:47:56.162774  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:56.163098  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:56.662775  412953 type.go:168] "Request Body" body=""
	I1210 07:47:56.662859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:56.663186  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:57.162786  412953 type.go:168] "Request Body" body=""
	I1210 07:47:57.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:57.163228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:57.663510  412953 type.go:168] "Request Body" body=""
	I1210 07:47:57.663581  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:57.663872  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:57.663929  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:58.163725  412953 type.go:168] "Request Body" body=""
	I1210 07:47:58.163813  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:58.164168  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:58.662866  412953 type.go:168] "Request Body" body=""
	I1210 07:47:58.662937  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:58.663291  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:59.163430  412953 type.go:168] "Request Body" body=""
	I1210 07:47:59.163502  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:59.163755  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:59.663519  412953 type.go:168] "Request Body" body=""
	I1210 07:47:59.663597  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:59.663931  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:59.663987  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:00.163719  412953 type.go:168] "Request Body" body=""
	I1210 07:48:00.163813  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:00.164225  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:00.663315  412953 type.go:168] "Request Body" body=""
	I1210 07:48:00.663399  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:00.663675  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:01.163489  412953 type.go:168] "Request Body" body=""
	I1210 07:48:01.163567  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:01.163893  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:01.663702  412953 type.go:168] "Request Body" body=""
	I1210 07:48:01.663781  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:01.664135  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:01.664198  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:02.162817  412953 type.go:168] "Request Body" body=""
	I1210 07:48:02.162886  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:02.163184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:02.662884  412953 type.go:168] "Request Body" body=""
	I1210 07:48:02.662971  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:02.663338  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:03.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:48:03.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:03.163213  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:03.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:48:03.662836  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:03.663120  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:04.162778  412953 type.go:168] "Request Body" body=""
	I1210 07:48:04.162852  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:04.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:04.163249  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:04.663059  412953 type.go:168] "Request Body" body=""
	I1210 07:48:04.663141  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:04.663463  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:05.162715  412953 type.go:168] "Request Body" body=""
	I1210 07:48:05.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:05.163131  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:05.663057  412953 type.go:168] "Request Body" body=""
	I1210 07:48:05.663143  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:05.663537  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:06.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:06.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:06.163194  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:06.662713  412953 type.go:168] "Request Body" body=""
	I1210 07:48:06.662786  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:06.663074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:06.663118  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:07.162740  412953 type.go:168] "Request Body" body=""
	I1210 07:48:07.162814  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:07.163183  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:07.662797  412953 type.go:168] "Request Body" body=""
	I1210 07:48:07.662880  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:07.663236  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:08.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:08.162871  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:08.163193  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:08.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:48:08.662859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:08.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:08.663248  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:09.162797  412953 type.go:168] "Request Body" body=""
	I1210 07:48:09.162876  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:09.163240  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:09.662795  412953 type.go:168] "Request Body" body=""
	I1210 07:48:09.662869  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:09.663133  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:10.162749  412953 type.go:168] "Request Body" body=""
	I1210 07:48:10.162821  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:10.163157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:10.662788  412953 type.go:168] "Request Body" body=""
	I1210 07:48:10.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:10.663447  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:10.663517  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:11.163125  412953 type.go:168] "Request Body" body=""
	I1210 07:48:11.163197  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:11.163452  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:11.663284  412953 type.go:168] "Request Body" body=""
	I1210 07:48:11.663356  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:11.663683  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:12.163495  412953 type.go:168] "Request Body" body=""
	I1210 07:48:12.163578  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:12.163924  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:12.663702  412953 type.go:168] "Request Body" body=""
	I1210 07:48:12.663773  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:12.664091  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:12.664142  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:13.162781  412953 type.go:168] "Request Body" body=""
	I1210 07:48:13.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:13.163174  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:13.662779  412953 type.go:168] "Request Body" body=""
	I1210 07:48:13.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:13.663203  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:14.162879  412953 type.go:168] "Request Body" body=""
	I1210 07:48:14.162951  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:14.163288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:14.663062  412953 type.go:168] "Request Body" body=""
	I1210 07:48:14.663137  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:14.663477  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:15.163290  412953 type.go:168] "Request Body" body=""
	I1210 07:48:15.163371  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:15.163703  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:15.163759  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:15.662995  412953 type.go:168] "Request Body" body=""
	I1210 07:48:15.663090  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:15.663504  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:16.163293  412953 type.go:168] "Request Body" body=""
	I1210 07:48:16.163371  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:16.163704  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:16.663391  412953 type.go:168] "Request Body" body=""
	I1210 07:48:16.663468  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:16.663824  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:17.163573  412953 type.go:168] "Request Body" body=""
	I1210 07:48:17.163645  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:17.163900  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:17.163949  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:17.662654  412953 type.go:168] "Request Body" body=""
	I1210 07:48:17.662730  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:17.663090  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:18.162799  412953 type.go:168] "Request Body" body=""
	I1210 07:48:18.162871  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:18.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:18.662707  412953 type.go:168] "Request Body" body=""
	I1210 07:48:18.662777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:18.663073  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:19.162733  412953 type.go:168] "Request Body" body=""
	I1210 07:48:19.162808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:19.163121  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:19.662805  412953 type.go:168] "Request Body" body=""
	I1210 07:48:19.662883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:19.663227  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:19.663286  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:20.162738  412953 type.go:168] "Request Body" body=""
	I1210 07:48:20.162812  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:20.163160  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:20.662944  412953 type.go:168] "Request Body" body=""
	I1210 07:48:20.663033  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:20.663345  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:21.162798  412953 type.go:168] "Request Body" body=""
	I1210 07:48:21.162875  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:21.163231  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:21.662783  412953 type.go:168] "Request Body" body=""
	I1210 07:48:21.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:21.663168  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:22.162769  412953 type.go:168] "Request Body" body=""
	I1210 07:48:22.162847  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:22.163187  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:22.163245  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:22.662786  412953 type.go:168] "Request Body" body=""
	I1210 07:48:22.662860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:22.663219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:23.162659  412953 type.go:168] "Request Body" body=""
	I1210 07:48:23.162735  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:23.163055  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:23.662748  412953 type.go:168] "Request Body" body=""
	I1210 07:48:23.662823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:23.663195  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:24.162905  412953 type.go:168] "Request Body" body=""
	I1210 07:48:24.162979  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:24.163329  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:24.163388  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:24.662665  412953 type.go:168] "Request Body" body=""
	I1210 07:48:24.662746  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:24.662999  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:25.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:48:25.162867  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:25.163229  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:25.662989  412953 type.go:168] "Request Body" body=""
	I1210 07:48:25.663082  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:25.663400  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:26.162934  412953 type.go:168] "Request Body" body=""
	I1210 07:48:26.163003  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:26.163282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:26.662757  412953 type.go:168] "Request Body" body=""
	I1210 07:48:26.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:26.663211  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:26.663268  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:27.162792  412953 type.go:168] "Request Body" body=""
	I1210 07:48:27.162873  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:27.163238  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:27.662787  412953 type.go:168] "Request Body" body=""
	I1210 07:48:27.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:27.663143  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:28.162822  412953 type.go:168] "Request Body" body=""
	I1210 07:48:28.162896  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:28.163266  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:28.662851  412953 type.go:168] "Request Body" body=""
	I1210 07:48:28.662926  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:28.663274  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:28.663332  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:29.162716  412953 type.go:168] "Request Body" body=""
	I1210 07:48:29.162788  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:29.163115  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:29.662940  412953 type.go:168] "Request Body" body=""
	I1210 07:48:29.663035  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:29.663342  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:30.162806  412953 type.go:168] "Request Body" body=""
	I1210 07:48:30.162885  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:30.163215  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:30.663057  412953 type.go:168] "Request Body" body=""
	I1210 07:48:30.663131  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:30.663453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:30.663521  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:31.162771  412953 type.go:168] "Request Body" body=""
	I1210 07:48:31.162848  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:31.163188  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:31.662737  412953 type.go:168] "Request Body" body=""
	I1210 07:48:31.662810  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:31.663134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:32.162725  412953 type.go:168] "Request Body" body=""
	I1210 07:48:32.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:32.163156  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:32.662781  412953 type.go:168] "Request Body" body=""
	I1210 07:48:32.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:32.663205  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:33.162800  412953 type.go:168] "Request Body" body=""
	I1210 07:48:33.162878  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:33.163211  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:33.163260  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:33.662739  412953 type.go:168] "Request Body" body=""
	I1210 07:48:33.662809  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:33.663092  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:34.162756  412953 type.go:168] "Request Body" body=""
	I1210 07:48:34.162829  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:34.163174  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:34.663106  412953 type.go:168] "Request Body" body=""
	I1210 07:48:34.663176  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:34.663513  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:35.163270  412953 type.go:168] "Request Body" body=""
	I1210 07:48:35.163336  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:35.163605  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:35.163645  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:35.663599  412953 type.go:168] "Request Body" body=""
	I1210 07:48:35.663677  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:35.663980  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:36.162698  412953 type.go:168] "Request Body" body=""
	I1210 07:48:36.162778  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:36.163088  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:36.662771  412953 type.go:168] "Request Body" body=""
	I1210 07:48:36.662845  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:36.663185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:37.162797  412953 type.go:168] "Request Body" body=""
	I1210 07:48:37.162873  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:37.163228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:37.662944  412953 type.go:168] "Request Body" body=""
	I1210 07:48:37.663046  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:37.663363  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:37.663421  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:38.162719  412953 type.go:168] "Request Body" body=""
	I1210 07:48:38.162806  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:38.163087  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:38.662798  412953 type.go:168] "Request Body" body=""
	I1210 07:48:38.662885  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:38.663268  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:39.162981  412953 type.go:168] "Request Body" body=""
	I1210 07:48:39.163079  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:39.163418  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:39.662892  412953 type.go:168] "Request Body" body=""
	I1210 07:48:39.662959  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:39.663228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:40.163433  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll of /api/v1/nodes/functional-314220 repeated every ~500ms from 07:48:40 through 07:49:41; every attempt failed with the same "connect: connection refused", and node_ready.go logged the "will retry" warning on a roughly two-second cadence throughout ...]
	W1210 07:49:41.163786  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:41.663533  412953 type.go:168] "Request Body" body=""
	I1210 07:49:41.663607  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:41.663906  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:42.163638  412953 type.go:168] "Request Body" body=""
	I1210 07:49:42.163723  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:42.164115  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:42.663519  412953 type.go:168] "Request Body" body=""
	I1210 07:49:42.663591  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:42.663894  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:43.163685  412953 type.go:168] "Request Body" body=""
	I1210 07:49:43.163760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:43.164112  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:43.164166  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:43.662835  412953 type.go:168] "Request Body" body=""
	I1210 07:49:43.662918  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:43.663257  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:44.162716  412953 type.go:168] "Request Body" body=""
	I1210 07:49:44.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:44.163078  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:44.663092  412953 type.go:168] "Request Body" body=""
	I1210 07:49:44.663175  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:44.663483  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:45.163326  412953 type.go:168] "Request Body" body=""
	I1210 07:49:45.163419  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:45.163779  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:45.662654  412953 type.go:168] "Request Body" body=""
	I1210 07:49:45.662729  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:45.663065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:45.663117  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:46.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:46.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:46.163223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:46.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:49:46.662868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:46.663236  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:47.162751  412953 type.go:168] "Request Body" body=""
	I1210 07:49:47.162831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:47.163170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:47.662766  412953 type.go:168] "Request Body" body=""
	I1210 07:49:47.662844  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:47.663198  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:47.663257  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:48.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:49:48.162890  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:48.163263  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:48.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:49:48.662776  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:48.663061  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:49.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:49:49.162894  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:49.163340  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:49.662794  412953 type.go:168] "Request Body" body=""
	I1210 07:49:49.662875  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:49.663233  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:49.663295  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:50.163641  412953 type.go:168] "Request Body" body=""
	I1210 07:49:50.163727  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:50.163987  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:50.663073  412953 type.go:168] "Request Body" body=""
	I1210 07:49:50.663155  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:50.663511  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:51.163122  412953 type.go:168] "Request Body" body=""
	I1210 07:49:51.163206  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:51.163540  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:51.663260  412953 type.go:168] "Request Body" body=""
	I1210 07:49:51.663334  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:51.663585  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:51.663641  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:52.163471  412953 type.go:168] "Request Body" body=""
	I1210 07:49:52.163547  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:52.163896  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:52.663720  412953 type.go:168] "Request Body" body=""
	I1210 07:49:52.663796  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:52.664128  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:53.162655  412953 type.go:168] "Request Body" body=""
	I1210 07:49:53.162729  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:53.162984  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:53.662712  412953 type.go:168] "Request Body" body=""
	I1210 07:49:53.662785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:53.663162  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:54.162732  412953 type.go:168] "Request Body" body=""
	I1210 07:49:54.162812  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:54.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:54.163230  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:54.663007  412953 type.go:168] "Request Body" body=""
	I1210 07:49:54.663092  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:54.663348  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:55.162765  412953 type.go:168] "Request Body" body=""
	I1210 07:49:55.162839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:55.163203  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:55.662783  412953 type.go:168] "Request Body" body=""
	I1210 07:49:55.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:55.663208  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:56.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:49:56.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:56.163134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:56.662828  412953 type.go:168] "Request Body" body=""
	I1210 07:49:56.662903  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:56.663245  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:56.663302  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:57.162993  412953 type.go:168] "Request Body" body=""
	I1210 07:49:57.163092  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:57.163459  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:57.663133  412953 type.go:168] "Request Body" body=""
	I1210 07:49:57.663213  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:57.663561  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:58.163311  412953 type.go:168] "Request Body" body=""
	I1210 07:49:58.163387  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:58.163735  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:58.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:49:58.663591  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:58.663925  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:58.663979  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:59.163560  412953 type.go:168] "Request Body" body=""
	I1210 07:49:59.163638  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:59.163958  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:59.662670  412953 type.go:168] "Request Body" body=""
	I1210 07:49:59.662749  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:59.663105  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:00.162806  412953 type.go:168] "Request Body" body=""
	I1210 07:50:00.162893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:00.163244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:00.663460  412953 type.go:168] "Request Body" body=""
	I1210 07:50:00.663540  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:00.664068  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:00.664196  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:01.162803  412953 type.go:168] "Request Body" body=""
	I1210 07:50:01.162896  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:01.163273  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:01.662786  412953 type.go:168] "Request Body" body=""
	I1210 07:50:01.662862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:01.663242  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:02.163061  412953 type.go:168] "Request Body" body=""
	I1210 07:50:02.163133  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:02.163407  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:02.662773  412953 type.go:168] "Request Body" body=""
	I1210 07:50:02.662847  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:02.663177  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:03.162915  412953 type.go:168] "Request Body" body=""
	I1210 07:50:03.162990  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:03.163330  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:03.163387  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:03.662715  412953 type.go:168] "Request Body" body=""
	I1210 07:50:03.662782  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:03.663054  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:04.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:50:04.162866  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:04.163214  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:04.663178  412953 type.go:168] "Request Body" body=""
	I1210 07:50:04.663273  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:04.663624  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:05.163424  412953 type.go:168] "Request Body" body=""
	I1210 07:50:05.163513  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:05.163807  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:05.163852  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:05.662744  412953 type.go:168] "Request Body" body=""
	I1210 07:50:05.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:05.663225  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:06.162951  412953 type.go:168] "Request Body" body=""
	I1210 07:50:06.163056  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:06.163423  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:06.662712  412953 type.go:168] "Request Body" body=""
	I1210 07:50:06.662782  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:06.663083  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:07.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:50:07.162850  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:07.163201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:07.662893  412953 type.go:168] "Request Body" body=""
	I1210 07:50:07.662969  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:07.663309  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:07.663366  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:08.162691  412953 type.go:168] "Request Body" body=""
	I1210 07:50:08.162763  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:08.163063  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:08.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:50:08.662868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:08.663218  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:09.162919  412953 type.go:168] "Request Body" body=""
	I1210 07:50:09.163000  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:09.163347  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:09.662690  412953 type.go:168] "Request Body" body=""
	I1210 07:50:09.662770  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:09.663051  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:10.162778  412953 type.go:168] "Request Body" body=""
	I1210 07:50:10.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:10.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:10.163249  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:10.662950  412953 type.go:168] "Request Body" body=""
	I1210 07:50:10.663039  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:10.663349  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:11.162723  412953 type.go:168] "Request Body" body=""
	I1210 07:50:11.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:11.163072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:11.662754  412953 type.go:168] "Request Body" body=""
	I1210 07:50:11.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:11.663172  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:12.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:50:12.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:12.163253  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:12.163314  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:12.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:50:12.662806  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:12.663135  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:13.162827  412953 type.go:168] "Request Body" body=""
	I1210 07:50:13.162909  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:13.163270  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:13.662849  412953 type.go:168] "Request Body" body=""
	I1210 07:50:13.662924  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:13.663237  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:14.162709  412953 type.go:168] "Request Body" body=""
	I1210 07:50:14.162777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:14.163050  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:14.663072  412953 type.go:168] "Request Body" body=""
	I1210 07:50:14.663154  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:14.663480  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:14.663533  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:15.163298  412953 type.go:168] "Request Body" body=""
	I1210 07:50:15.163374  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:15.163686  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:15.662909  412953 node_ready.go:38] duration metric: took 6m0.000357427s for node "functional-314220" to be "Ready" ...
	I1210 07:50:15.669570  412953 out.go:203] 
	W1210 07:50:15.672493  412953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:50:15.672574  412953 out.go:285] * 
	W1210 07:50:15.674736  412953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:50:15.677520  412953 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.388849086Z" level=info msg="Using the internal default seccomp profile"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.38885711Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.388863068Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.38886886Z" level=info msg="RDT not available in the host system"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.388880865Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.389567258Z" level=info msg="Conmon does support the --sync option"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.389591086Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.389607497Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.390302998Z" level=info msg="Conmon does support the --sync option"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.390324102Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.390445343Z" level=info msg="Updated default CNI network name to "
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.391033462Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.391396799Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.391450141Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440019086Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440054435Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440099055Z" level=info msg="Create NRI interface"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440367407Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.44039327Z" level=info msg="runtime interface created"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440406103Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440412708Z" level=info msg="runtime interface starting up..."
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440423556Z" level=info msg="starting plugins..."
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440438891Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440515126Z" level=info msg="No systemd watchdog enabled"
	Dec 10 07:44:13 functional-314220 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:50:17.545511    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:17.546246    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:17.547856    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:17.548451    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:17.550054    8552 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:50:17 up  2:32,  0 user,  load average: 0.33, 0.29, 0.81
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:50:14 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:15 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1136.
	Dec 10 07:50:15 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:15 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:15 functional-314220 kubelet[8441]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:15 functional-314220 kubelet[8441]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:15 functional-314220 kubelet[8441]: E1210 07:50:15.754364    8441 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:15 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:15 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:16 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 10 07:50:16 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:16 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:16 functional-314220 kubelet[8446]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:16 functional-314220 kubelet[8446]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:16 functional-314220 kubelet[8446]: E1210 07:50:16.484216    8446 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:16 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:16 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:17 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 10 07:50:17 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:17 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:17 functional-314220 kubelet[8468]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:17 functional-314220 kubelet[8468]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:17 functional-314220 kubelet[8468]: E1210 07:50:17.210121    8468 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:17 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:17 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.

-- /stdout --
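The crash loop above has a single root cause: the v1.35.0-beta.0 kubelet refuses to start on a cgroup v1 host, and the kicbase container inherits the cgroup hierarchy of the CI host (Ubuntu 20.04, which still defaults to cgroup v1). A hand-run sketch for checking which hierarchy a machine is on, using standard tooling rather than anything from the harness (the outputs shown are what a v1 host typically prints):

	$ stat -fc %T /sys/fs/cgroup
	tmpfs          # cgroup v1; a unified (v2) host prints "cgroup2fs" instead
	$ cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || echo "no unified hierarchy => cgroup v1"
	no unified hierarchy => cgroup v1

On a systemd-based host like this one, booting with systemd.unified_cgroup_hierarchy=1 on the kernel command line is the usual way to switch to cgroup v2.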
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (408.059641ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (368.27s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-314220 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-314220 get po -A: exit status 1 (59.011756ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-314220 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-314220 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-314220 get po -A"
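All three assertions above follow from one fact: nothing is listening on the apiserver endpoint. A minimal hand-run sketch for separating a kubeconfig problem from a dead apiserver, assuming the same context name (standard kubectl/curl, not harness code; the curl output is illustrative):

	$ kubectl config view --minify --context functional-314220 -o jsonpath='{.clusters[0].cluster.server}'
	https://192.168.49.2:8441
	$ curl -k --max-time 2 https://192.168.49.2:8441/healthz
	curl: (7) Failed to connect to 192.168.49.2 port 8441: Connection refused

Since the server URL in the kubeconfig matches the address kubectl reported, the kubeconfig is consistent and the failure sits with the apiserver itself, which the kubelet crash loop above already explains.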
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
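When rerunning this post-mortem by hand, the full inspect dump above can be narrowed with the same Go-template pattern the harness itself uses later in this log (the docker container inspect -f cli_runner calls). For example, a sketch that recovers just the host port published for the apiserver's 8441/tcp, with output taken from the dump above:

	$ docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-314220
	33161
	$ docker port functional-314220 8441/tcp
	127.0.0.1:33161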
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (290.396764ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-314220 logs -n 25: (1.0509879s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-446865 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdspecific-port2634105139/001:/mount-9p --alsologtostderr -v=1 --port 46464                 │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ ssh            │ functional-446865 ssh findmnt -T /mount-9p | grep 9p                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh -- ls -la /mount-9p                                                                                                         │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh sudo umount -f /mount-9p                                                                                                    │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount1 --alsologtostderr -v=1                                │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount2 --alsologtostderr -v=1                                │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ ssh            │ functional-446865 ssh findmnt -T /mount1                                                                                                          │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ mount          │ -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount3 --alsologtostderr -v=1                                │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ ssh            │ functional-446865 ssh findmnt -T /mount2                                                                                                          │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh findmnt -T /mount3                                                                                                          │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ mount          │ -p functional-446865 --kill=true                                                                                                                  │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ update-context │ functional-446865 update-context --alsologtostderr -v=2                                                                                           │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ update-context │ functional-446865 update-context --alsologtostderr -v=2                                                                                           │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ update-context │ functional-446865 update-context --alsologtostderr -v=2                                                                                           │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format short --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format yaml --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh            │ functional-446865 ssh pgrep buildkitd                                                                                                             │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ image          │ functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr                                            │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format json --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls                                                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image          │ functional-446865 image ls --format table --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ delete         │ -p functional-446865                                                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ start          │ -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ start          │ -p functional-314220 --alsologtostderr -v=8                                                                                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:44 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:44:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:44:10.487397  412953 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:44:10.487521  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487566  412953 out.go:374] Setting ErrFile to fd 2...
	I1210 07:44:10.487572  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487834  412953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:44:10.488205  412953 out.go:368] Setting JSON to false
	I1210 07:44:10.489052  412953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8801,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:44:10.489127  412953 start.go:143] virtualization:  
	I1210 07:44:10.492628  412953 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:44:10.495451  412953 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:44:10.495581  412953 notify.go:221] Checking for updates...
	I1210 07:44:10.501282  412953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:44:10.504171  412953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:10.506968  412953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:44:10.509885  412953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:44:10.512742  412953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:44:10.516079  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:10.516221  412953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:44:10.539133  412953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:44:10.539253  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.606789  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.597593273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.606896  412953 docker.go:319] overlay module found
	I1210 07:44:10.611915  412953 out.go:179] * Using the docker driver based on existing profile
	I1210 07:44:10.614862  412953 start.go:309] selected driver: docker
	I1210 07:44:10.614885  412953 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.614994  412953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:44:10.615113  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.673141  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.664474897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.673572  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:10.673631  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:10.673679  412953 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.678679  412953 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:44:10.681372  412953 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:44:10.684277  412953 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:44:10.687267  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:10.687329  412953 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:44:10.687343  412953 cache.go:65] Caching tarball of preloaded images
	I1210 07:44:10.687350  412953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:44:10.687434  412953 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:44:10.687444  412953 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 07:44:10.687550  412953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:44:10.707132  412953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:44:10.707156  412953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:44:10.707176  412953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:44:10.707214  412953 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:44:10.707283  412953 start.go:364] duration metric: took 45.104µs to acquireMachinesLock for "functional-314220"
	I1210 07:44:10.707306  412953 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:44:10.707317  412953 fix.go:54] fixHost starting: 
	I1210 07:44:10.707577  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:10.723920  412953 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:44:10.723951  412953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:44:10.727176  412953 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:44:10.727205  412953 machine.go:94] provisionDockerMachine start ...
	I1210 07:44:10.727283  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.744553  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.744931  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.744946  412953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:44:10.878742  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:10.878763  412953 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:44:10.878828  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.897712  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.898057  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.898077  412953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:44:11.052065  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:11.052160  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.072344  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.072686  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.072703  412953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:44:11.207289  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:44:11.207317  412953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:44:11.207348  412953 ubuntu.go:190] setting up certificates
	I1210 07:44:11.207366  412953 provision.go:84] configureAuth start
	I1210 07:44:11.207429  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:11.224935  412953 provision.go:143] copyHostCerts
	I1210 07:44:11.224978  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225021  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:44:11.225032  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225107  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:44:11.225201  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225224  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:44:11.225234  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225268  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:44:11.225321  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225345  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:44:11.225354  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225380  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:44:11.225441  412953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:44:11.417392  412953 provision.go:177] copyRemoteCerts
	I1210 07:44:11.417460  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:44:11.417497  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.436410  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:11.535532  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 07:44:11.535603  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:44:11.553463  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 07:44:11.553526  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:44:11.571834  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 07:44:11.571892  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:44:11.590409  412953 provision.go:87] duration metric: took 383.016251ms to configureAuth
	I1210 07:44:11.590435  412953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:44:11.590614  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:11.590731  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.608257  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.608571  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.608596  412953 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:44:11.906129  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:44:11.906170  412953 machine.go:97] duration metric: took 1.17895657s to provisionDockerMachine
	I1210 07:44:11.906181  412953 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:44:11.906194  412953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:44:11.906264  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:44:11.906303  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.923285  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.019543  412953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:44:12.023176  412953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 07:44:12.023203  412953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 07:44:12.023208  412953 command_runner.go:130] > VERSION_ID="12"
	I1210 07:44:12.023217  412953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 07:44:12.023222  412953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 07:44:12.023226  412953 command_runner.go:130] > ID=debian
	I1210 07:44:12.023231  412953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 07:44:12.023236  412953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 07:44:12.023245  412953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 07:44:12.023295  412953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:44:12.023316  412953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:44:12.023330  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:44:12.023386  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:44:12.023472  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:44:12.023483  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /etc/ssl/certs/3785282.pem
	I1210 07:44:12.023563  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:44:12.023571  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> /etc/test/nested/copy/378528/hosts
	I1210 07:44:12.023617  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:44:12.031659  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:12.049814  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:44:12.067644  412953 start.go:296] duration metric: took 161.447867ms for postStartSetup
	I1210 07:44:12.067748  412953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:44:12.067798  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.084856  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.184547  412953 command_runner.go:130] > 14%
	I1210 07:44:12.184639  412953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:44:12.189562  412953 command_runner.go:130] > 169G
	I1210 07:44:12.189589  412953 fix.go:56] duration metric: took 1.4822703s for fixHost
	I1210 07:44:12.189600  412953 start.go:83] releasing machines lock for "functional-314220", held for 1.482305303s
	I1210 07:44:12.189668  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:12.206193  412953 ssh_runner.go:195] Run: cat /version.json
	I1210 07:44:12.206242  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.206484  412953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:44:12.206547  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.229509  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.231766  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.322395  412953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 07:44:12.322525  412953 ssh_runner.go:195] Run: systemctl --version
	I1210 07:44:12.409743  412953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 07:44:12.412779  412953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 07:44:12.412818  412953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 07:44:12.412894  412953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:44:12.460937  412953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 07:44:12.466609  412953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 07:44:12.466697  412953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:44:12.466802  412953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:44:12.474626  412953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:44:12.474651  412953 start.go:496] detecting cgroup driver to use...
	I1210 07:44:12.474708  412953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:44:12.474780  412953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:44:12.490092  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:44:12.503562  412953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:44:12.503627  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:44:12.518840  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:44:12.531838  412953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:44:12.642559  412953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:44:12.762873  412953 docker.go:234] disabling docker service ...
	I1210 07:44:12.762979  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:44:12.778725  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:44:12.791652  412953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:44:12.911705  412953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:44:13.035394  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:44:13.049695  412953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:44:13.065431  412953 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1210 07:44:13.065522  412953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:44:13.065609  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.075381  412953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:44:13.075482  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.085452  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.094855  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.104471  412953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:44:13.112786  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.121728  412953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.130205  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.139248  412953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:44:13.145900  412953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 07:44:13.147163  412953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
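Both kernel settings checked here (bridge-nf-call-iptables and ip_forward) need to be 1 for pod networking, and the echo above only sets ip_forward for the current boot. A persistent equivalent, which minikube does not do here, would be something like:

	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system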
	I1210 07:44:13.154995  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.289205  412953 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:44:13.445871  412953 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:44:13.446002  412953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:44:13.449677  412953 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 07:44:13.449750  412953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 07:44:13.449774  412953 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1210 07:44:13.449787  412953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:13.449793  412953 command_runner.go:130] > Access: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449816  412953 command_runner.go:130] > Modify: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449826  412953 command_runner.go:130] > Change: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449830  412953 command_runner.go:130] >  Birth: -
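Both 60-second waits here (for the socket above, and for crictl just below) amount to a simple poll; as a shell sketch of the socket case:

	# poll for the CRI socket for up to 60s, as start.go:543 describes
	for _ in $(seq 1 60); do
	  stat /var/run/crio/crio.sock >/dev/null 2>&1 && break
	  sleep 1
	done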
	I1210 07:44:13.449864  412953 start.go:564] Will wait 60s for crictl version
	I1210 07:44:13.449928  412953 ssh_runner.go:195] Run: which crictl
	I1210 07:44:13.453538  412953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 07:44:13.453678  412953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:44:13.477475  412953 command_runner.go:130] > Version:  0.1.0
	I1210 07:44:13.477498  412953 command_runner.go:130] > RuntimeName:  cri-o
	I1210 07:44:13.477503  412953 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 07:44:13.477509  412953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 07:44:13.477520  412953 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:44:13.477602  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.505751  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.505796  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.505803  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.505808  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.505813  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.505817  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.505821  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.505826  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.505835  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.505838  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.505844  412953 command_runner.go:130] >      static
	I1210 07:44:13.505848  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.505852  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.505859  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.505863  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.505874  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.505877  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.505881  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.505886  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.505895  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.507701  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.535170  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.535233  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.535254  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.535275  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.535296  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.535314  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.535334  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.535358  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.535377  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.535395  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.535414  412953 command_runner.go:130] >      static
	I1210 07:44:13.535432  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.535451  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.535471  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.535489  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.535518  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.535548  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.535566  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.535590  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.535609  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.540516  412953 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:44:13.543340  412953 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
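The Go template in that docker network inspect call flattens the network object into one JSON line. Given the addresses visible elsewhere in this run (gateway 192.168.49.1, node 192.168.49.2), its output is presumably shaped like the line below; Driver, Subnet, MTU and the prefix length are illustrative guesses, and the trailing comma inside ContainerIPs is an artifact of the template's range loop:

	{"Name": "functional-314220","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 1500, "ContainerIPs": ["192.168.49.2/24",]}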
	I1210 07:44:13.558881  412953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:44:13.562785  412953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 07:44:13.562964  412953 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
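This cluster spec is minikube's persisted profile config. On the host it can normally be inspected directly; the path below follows minikube's usual layout and jq is assumed to be installed:

	jq '.KubernetesConfig' ~/.minikube/profiles/functional-314220/config.json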
	I1210 07:44:13.563103  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:13.563170  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.592036  412953 command_runner.go:130] > {
	I1210 07:44:13.592059  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.592064  412953 command_runner.go:130] >     {
	I1210 07:44:13.592073  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.592083  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592089  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.592093  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592096  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592118  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.592130  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.592138  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592144  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.592154  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592159  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592163  412953 command_runner.go:130] >     },
	I1210 07:44:13.592169  412953 command_runner.go:130] >     {
	I1210 07:44:13.592176  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.592183  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592189  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.592192  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592196  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592207  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.592217  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.592221  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592225  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.592231  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592239  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592246  412953 command_runner.go:130] >     },
	I1210 07:44:13.592249  412953 command_runner.go:130] >     {
	I1210 07:44:13.592255  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.592264  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592269  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.592272  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592278  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592286  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.592297  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.592300  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592306  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.592311  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.592317  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592320  412953 command_runner.go:130] >     },
	I1210 07:44:13.592329  412953 command_runner.go:130] >     {
	I1210 07:44:13.592338  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.592342  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592354  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.592357  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592361  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592374  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.592381  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.592387  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592391  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.592395  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592401  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592405  412953 command_runner.go:130] >       },
	I1210 07:44:13.592420  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592424  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592429  412953 command_runner.go:130] >     },
	I1210 07:44:13.592433  412953 command_runner.go:130] >     {
	I1210 07:44:13.592446  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.592450  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592457  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.592461  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592465  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592474  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.592484  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.592488  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592494  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.592498  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592522  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592525  412953 command_runner.go:130] >       },
	I1210 07:44:13.592530  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592538  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592541  412953 command_runner.go:130] >     },
	I1210 07:44:13.592545  412953 command_runner.go:130] >     {
	I1210 07:44:13.592556  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.592563  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592569  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.592579  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592582  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592591  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.592602  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.592606  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592616  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.592619  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592623  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592628  412953 command_runner.go:130] >       },
	I1210 07:44:13.592633  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592639  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592642  412953 command_runner.go:130] >     },
	I1210 07:44:13.592645  412953 command_runner.go:130] >     {
	I1210 07:44:13.592652  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.592663  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592669  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.592674  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592678  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592691  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.592702  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.592706  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592712  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.592717  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592723  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592726  412953 command_runner.go:130] >     },
	I1210 07:44:13.592729  412953 command_runner.go:130] >     {
	I1210 07:44:13.592735  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.592741  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592747  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.592750  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592764  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592772  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.592793  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.592800  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592804  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.592808  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592817  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592820  412953 command_runner.go:130] >       },
	I1210 07:44:13.592824  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592830  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592834  412953 command_runner.go:130] >     },
	I1210 07:44:13.592843  412953 command_runner.go:130] >     {
	I1210 07:44:13.592849  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.592853  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592858  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.592866  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592870  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592878  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.592888  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.592892  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592898  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.592902  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592911  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.592914  412953 command_runner.go:130] >       },
	I1210 07:44:13.592918  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592924  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.592927  412953 command_runner.go:130] >     }
	I1210 07:44:13.592932  412953 command_runner.go:130] >   ]
	I1210 07:44:13.592935  412953 command_runner.go:130] > }
	I1210 07:44:13.595219  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.595245  412953 crio.go:433] Images already preloaded, skipping extraction
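The preload check parses exactly this JSON. To reproduce the "all images are preloaded" verdict by hand, list the tags and compare them against the images kubeadm needs for v1.35.0-beta.0 (jq assumed available):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'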
	I1210 07:44:13.595305  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.620833  412953 command_runner.go:130] > {
	I1210 07:44:13.620851  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.620856  412953 command_runner.go:130] >     {
	I1210 07:44:13.620865  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.620870  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620884  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.620888  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620896  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620905  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.620913  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.620917  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620921  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.620925  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620930  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620933  412953 command_runner.go:130] >     },
	I1210 07:44:13.620936  412953 command_runner.go:130] >     {
	I1210 07:44:13.620943  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.620947  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620952  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.620955  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620958  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620966  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.620975  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.620978  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620982  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.620985  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620991  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620994  412953 command_runner.go:130] >     },
	I1210 07:44:13.620997  412953 command_runner.go:130] >     {
	I1210 07:44:13.621003  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.621007  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621012  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.621015  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621019  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621027  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.621035  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.621038  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621042  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.621046  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.621049  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621056  412953 command_runner.go:130] >     },
	I1210 07:44:13.621059  412953 command_runner.go:130] >     {
	I1210 07:44:13.621066  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.621070  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621075  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.621079  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621083  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621091  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.621098  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.621102  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621105  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.621109  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621113  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621116  412953 command_runner.go:130] >       },
	I1210 07:44:13.621124  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621128  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621131  412953 command_runner.go:130] >     },
	I1210 07:44:13.621134  412953 command_runner.go:130] >     {
	I1210 07:44:13.621143  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.621147  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621152  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.621156  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621159  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621167  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.621175  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.621178  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621182  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.621185  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621189  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621192  412953 command_runner.go:130] >       },
	I1210 07:44:13.621196  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621199  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621202  412953 command_runner.go:130] >     },
	I1210 07:44:13.621208  412953 command_runner.go:130] >     {
	I1210 07:44:13.621214  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.621218  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621224  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.621227  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621231  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621239  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.621247  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.621250  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621255  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.621258  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621262  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621265  412953 command_runner.go:130] >       },
	I1210 07:44:13.621268  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621272  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621275  412953 command_runner.go:130] >     },
	I1210 07:44:13.621278  412953 command_runner.go:130] >     {
	I1210 07:44:13.621285  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.621289  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621294  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.621297  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621301  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621309  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.621317  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.621320  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621324  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.621327  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621331  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621334  412953 command_runner.go:130] >     },
	I1210 07:44:13.621337  412953 command_runner.go:130] >     {
	I1210 07:44:13.621343  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.621347  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621352  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.621359  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621363  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621371  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.621390  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.621393  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621397  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.621401  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621404  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621408  412953 command_runner.go:130] >       },
	I1210 07:44:13.621411  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621415  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621418  412953 command_runner.go:130] >     },
	I1210 07:44:13.621421  412953 command_runner.go:130] >     {
	I1210 07:44:13.621427  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.621431  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621435  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.621438  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621442  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621449  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.621456  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.621459  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621463  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.621466  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621470  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.621473  412953 command_runner.go:130] >       },
	I1210 07:44:13.621477  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621481  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.621483  412953 command_runner.go:130] >     }
	I1210 07:44:13.621486  412953 command_runner.go:130] >   ]
	I1210 07:44:13.621490  412953 command_runner.go:130] > }
	I1210 07:44:13.622855  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.622877  412953 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:44:13.622884  412953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:44:13.622995  412953 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
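These kubelet flags end up in a systemd drop-in on the node. Once the cluster is up they can be confirmed from the host with something like the following (profile name taken from this run):

	minikube ssh -p functional-314220 "systemctl cat kubelet"
	minikube ssh -p functional-314220 "pgrep -af kubelet"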
	I1210 07:44:13.623104  412953 ssh_runner.go:195] Run: crio config
	I1210 07:44:13.670610  412953 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 07:44:13.670640  412953 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 07:44:13.670648  412953 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 07:44:13.670652  412953 command_runner.go:130] > #
	I1210 07:44:13.670659  412953 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 07:44:13.670667  412953 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 07:44:13.670674  412953 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 07:44:13.670691  412953 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 07:44:13.670699  412953 command_runner.go:130] > # reload'.
	I1210 07:44:13.670706  412953 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 07:44:13.670713  412953 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 07:44:13.670722  412953 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 07:44:13.670728  412953 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 07:44:13.670733  412953 command_runner.go:130] > [crio]
	I1210 07:44:13.670747  412953 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 07:44:13.670755  412953 command_runner.go:130] > # containers images, in this directory.
	I1210 07:44:13.670764  412953 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 07:44:13.670774  412953 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 07:44:13.670784  412953 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 07:44:13.670792  412953 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 07:44:13.670799  412953 command_runner.go:130] > # imagestore = ""
	I1210 07:44:13.670805  412953 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 07:44:13.670812  412953 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 07:44:13.670819  412953 command_runner.go:130] > # storage_driver = "overlay"
	I1210 07:44:13.670826  412953 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 07:44:13.670832  412953 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 07:44:13.670839  412953 command_runner.go:130] > # storage_option = [
	I1210 07:44:13.670842  412953 command_runner.go:130] > # ]
	I1210 07:44:13.670848  412953 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 07:44:13.670854  412953 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 07:44:13.670864  412953 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 07:44:13.670876  412953 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 07:44:13.670886  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 07:44:13.670890  412953 command_runner.go:130] > # always happen on a node reboot
	I1210 07:44:13.670897  412953 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 07:44:13.670908  412953 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 07:44:13.670916  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 07:44:13.670921  412953 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 07:44:13.670927  412953 command_runner.go:130] > # version_file_persist = ""
	I1210 07:44:13.670948  412953 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 07:44:13.670957  412953 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 07:44:13.670965  412953 command_runner.go:130] > # internal_wipe = true
	I1210 07:44:13.670973  412953 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 07:44:13.670982  412953 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 07:44:13.670986  412953 command_runner.go:130] > # internal_repair = true
	I1210 07:44:13.670992  412953 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 07:44:13.671000  412953 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 07:44:13.671005  412953 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 07:44:13.671033  412953 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 07:44:13.671041  412953 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 07:44:13.671047  412953 command_runner.go:130] > [crio.api]
	I1210 07:44:13.671052  412953 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 07:44:13.671057  412953 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 07:44:13.671064  412953 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 07:44:13.671297  412953 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 07:44:13.671315  412953 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 07:44:13.671322  412953 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 07:44:13.671326  412953 command_runner.go:130] > # stream_port = "0"
	I1210 07:44:13.671356  412953 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 07:44:13.671366  412953 command_runner.go:130] > # stream_enable_tls = false
	I1210 07:44:13.671373  412953 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 07:44:13.671558  412953 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 07:44:13.671575  412953 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 07:44:13.671582  412953 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671587  412953 command_runner.go:130] > # stream_tls_cert = ""
	I1210 07:44:13.671593  412953 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 07:44:13.671617  412953 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671819  412953 command_runner.go:130] > # stream_tls_key = ""
	I1210 07:44:13.671835  412953 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 07:44:13.671853  412953 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 07:44:13.671864  412953 command_runner.go:130] > # automatically pick up the changes.
	I1210 07:44:13.671868  412953 command_runner.go:130] > # stream_tls_ca = ""
	I1210 07:44:13.671887  412953 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.671896  412953 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 07:44:13.671903  412953 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.672102  412953 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 07:44:13.672121  412953 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 07:44:13.672128  412953 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 07:44:13.672131  412953 command_runner.go:130] > [crio.runtime]
	I1210 07:44:13.672137  412953 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 07:44:13.672162  412953 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 07:44:13.672172  412953 command_runner.go:130] > # "nofile=1024:2048"
	I1210 07:44:13.672179  412953 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 07:44:13.672183  412953 command_runner.go:130] > # default_ulimits = [
	I1210 07:44:13.672188  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672195  412953 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 07:44:13.672201  412953 command_runner.go:130] > # no_pivot = false
	I1210 07:44:13.672207  412953 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 07:44:13.672214  412953 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 07:44:13.672219  412953 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 07:44:13.672235  412953 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 07:44:13.672241  412953 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 07:44:13.672248  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672445  412953 command_runner.go:130] > # conmon = ""
	I1210 07:44:13.672461  412953 command_runner.go:130] > # Cgroup setting for conmon
	I1210 07:44:13.672469  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 07:44:13.672473  412953 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 07:44:13.672480  412953 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 07:44:13.672502  412953 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 07:44:13.672522  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672875  412953 command_runner.go:130] > # conmon_env = [
	I1210 07:44:13.672888  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672895  412953 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 07:44:13.672900  412953 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 07:44:13.672907  412953 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 07:44:13.673114  412953 command_runner.go:130] > # default_env = [
	I1210 07:44:13.673128  412953 command_runner.go:130] > # ]
	I1210 07:44:13.673149  412953 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 07:44:13.673177  412953 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1210 07:44:13.673192  412953 command_runner.go:130] > # selinux = false
	I1210 07:44:13.673200  412953 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 07:44:13.673211  412953 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 07:44:13.673216  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673222  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.673228  412953 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 07:44:13.673240  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673428  412953 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 07:44:13.673444  412953 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 07:44:13.673452  412953 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 07:44:13.673459  412953 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 07:44:13.673478  412953 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 07:44:13.673488  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673492  412953 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 07:44:13.673498  412953 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 07:44:13.673505  412953 command_runner.go:130] > # the cgroup blockio controller.
	I1210 07:44:13.673509  412953 command_runner.go:130] > # blockio_config_file = ""
	I1210 07:44:13.673515  412953 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 07:44:13.673522  412953 command_runner.go:130] > # blockio parameters.
	I1210 07:44:13.673725  412953 command_runner.go:130] > # blockio_reload = false
	I1210 07:44:13.673738  412953 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 07:44:13.673742  412953 command_runner.go:130] > # irqbalance daemon.
	I1210 07:44:13.673748  412953 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 07:44:13.673757  412953 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1210 07:44:13.673788  412953 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 07:44:13.673801  412953 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 07:44:13.673807  412953 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 07:44:13.673816  412953 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 07:44:13.673821  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673830  412953 command_runner.go:130] > # rdt_config_file = ""
	I1210 07:44:13.673837  412953 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 07:44:13.674053  412953 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 07:44:13.674071  412953 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 07:44:13.674076  412953 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 07:44:13.674083  412953 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 07:44:13.674102  412953 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 07:44:13.674116  412953 command_runner.go:130] > # will be added.
	I1210 07:44:13.674121  412953 command_runner.go:130] > # default_capabilities = [
	I1210 07:44:13.674343  412953 command_runner.go:130] > # 	"CHOWN",
	I1210 07:44:13.674352  412953 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 07:44:13.674356  412953 command_runner.go:130] > # 	"FSETID",
	I1210 07:44:13.674359  412953 command_runner.go:130] > # 	"FOWNER",
	I1210 07:44:13.674363  412953 command_runner.go:130] > # 	"SETGID",
	I1210 07:44:13.674366  412953 command_runner.go:130] > # 	"SETUID",
	I1210 07:44:13.674423  412953 command_runner.go:130] > # 	"SETPCAP",
	I1210 07:44:13.674435  412953 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 07:44:13.674593  412953 command_runner.go:130] > # 	"KILL",
	I1210 07:44:13.674604  412953 command_runner.go:130] > # ]
	I1210 07:44:13.674621  412953 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 07:44:13.674632  412953 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 07:44:13.674812  412953 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 07:44:13.674829  412953 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 07:44:13.674836  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.674844  412953 command_runner.go:130] > default_sysctls = [
	I1210 07:44:13.674849  412953 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 07:44:13.674855  412953 command_runner.go:130] > ]
	I1210 07:44:13.674860  412953 command_runner.go:130] > # List of devices on the host that a
	I1210 07:44:13.674883  412953 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 07:44:13.674902  412953 command_runner.go:130] > # allowed_devices = [
	I1210 07:44:13.675282  412953 command_runner.go:130] > # 	"/dev/fuse",
	I1210 07:44:13.675296  412953 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 07:44:13.675300  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675305  412953 command_runner.go:130] > # List of additional devices. specified as
	I1210 07:44:13.675313  412953 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 07:44:13.675339  412953 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 07:44:13.675346  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.675350  412953 command_runner.go:130] > # additional_devices = [
	I1210 07:44:13.675524  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675539  412953 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 07:44:13.675543  412953 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 07:44:13.675549  412953 command_runner.go:130] > # 	"/etc/cdi",
	I1210 07:44:13.675552  412953 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 07:44:13.675555  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675562  412953 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 07:44:13.675584  412953 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 07:44:13.675594  412953 command_runner.go:130] > # Defaults to false.
	I1210 07:44:13.675951  412953 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 07:44:13.675970  412953 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 07:44:13.675978  412953 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 07:44:13.675982  412953 command_runner.go:130] > # hooks_dir = [
	I1210 07:44:13.676213  412953 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 07:44:13.676224  412953 command_runner.go:130] > # ]
	I1210 07:44:13.676231  412953 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 07:44:13.676237  412953 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 07:44:13.676246  412953 command_runner.go:130] > # its default mounts from the following two files:
	I1210 07:44:13.676261  412953 command_runner.go:130] > #
	I1210 07:44:13.676273  412953 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 07:44:13.676280  412953 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 07:44:13.676286  412953 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 07:44:13.676291  412953 command_runner.go:130] > #
	I1210 07:44:13.676298  412953 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 07:44:13.676304  412953 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 07:44:13.676313  412953 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 07:44:13.676318  412953 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 07:44:13.676321  412953 command_runner.go:130] > #
	I1210 07:44:13.676325  412953 command_runner.go:130] > # default_mounts_file = ""
	I1210 07:44:13.676345  412953 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 07:44:13.676358  412953 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 07:44:13.676363  412953 command_runner.go:130] > # pids_limit = -1
	I1210 07:44:13.676375  412953 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 07:44:13.676381  412953 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 07:44:13.676391  412953 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 07:44:13.676400  412953 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 07:44:13.676412  412953 command_runner.go:130] > # log_size_max = -1
	I1210 07:44:13.676423  412953 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 07:44:13.676626  412953 command_runner.go:130] > # log_to_journald = false
	I1210 07:44:13.676643  412953 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 07:44:13.676650  412953 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 07:44:13.676677  412953 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 07:44:13.676879  412953 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 07:44:13.676891  412953 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 07:44:13.676896  412953 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 07:44:13.676903  412953 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 07:44:13.676909  412953 command_runner.go:130] > # read_only = false
	I1210 07:44:13.676916  412953 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 07:44:13.676942  412953 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 07:44:13.676953  412953 command_runner.go:130] > # live configuration reload.
	I1210 07:44:13.676956  412953 command_runner.go:130] > # log_level = "info"
	I1210 07:44:13.676967  412953 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 07:44:13.676977  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.677149  412953 command_runner.go:130] > # log_filter = ""
	I1210 07:44:13.677166  412953 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677173  412953 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 07:44:13.677177  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677186  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677212  412953 command_runner.go:130] > # uid_mappings = ""
	I1210 07:44:13.677225  412953 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677231  412953 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 07:44:13.677238  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677246  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677420  412953 command_runner.go:130] > # gid_mappings = ""
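	As a sketch of the containerUID:HostUID:Size format described above (the 0:100000:65536 range is hypothetical, and both options are deprecated in favor of KEP-127):
	
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"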
	I1210 07:44:13.677432  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 07:44:13.677439  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677446  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677455  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677480  412953 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 07:44:13.677493  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 07:44:13.677500  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677512  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677522  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677681  412953 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 07:44:13.677697  412953 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 07:44:13.677705  412953 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 07:44:13.677711  412953 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1210 07:44:13.677936  412953 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 07:44:13.677953  412953 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 07:44:13.677960  412953 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 07:44:13.677965  412953 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 07:44:13.677970  412953 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 07:44:13.677991  412953 command_runner.go:130] > # drop_infra_ctr = true
	I1210 07:44:13.678004  412953 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 07:44:13.678011  412953 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 07:44:13.678020  412953 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 07:44:13.678031  412953 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 07:44:13.678039  412953 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 07:44:13.678048  412953 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 07:44:13.678054  412953 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 07:44:13.678068  412953 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 07:44:13.678282  412953 command_runner.go:130] > # shared_cpuset = ""
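	A sketch of the Linux CPU list format these options accept (the CPU numbers are hypothetical; per the comment above, infra_ctr_cpuset would typically mirror the kubelet's reserved-cpus):
	
	[crio.runtime]
	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2,3"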
	I1210 07:44:13.678299  412953 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 07:44:13.678306  412953 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 07:44:13.678310  412953 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 07:44:13.678328  412953 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 07:44:13.678337  412953 command_runner.go:130] > # pinns_path = ""
	I1210 07:44:13.678343  412953 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 07:44:13.678349  412953 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 07:44:13.678540  412953 command_runner.go:130] > # enable_criu_support = true
	I1210 07:44:13.678551  412953 command_runner.go:130] > # Enable/disable the generation of the container and
	I1210 07:44:13.678558  412953 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1210 07:44:13.678563  412953 command_runner.go:130] > # enable_pod_events = false
	I1210 07:44:13.678572  412953 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 07:44:13.678599  412953 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 07:44:13.678604  412953 command_runner.go:130] > # default_runtime = "crun"
	I1210 07:44:13.678609  412953 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 07:44:13.678622  412953 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1210 07:44:13.678632  412953 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 07:44:13.678642  412953 command_runner.go:130] > # creation as a file is not desired either.
	I1210 07:44:13.678651  412953 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 07:44:13.678663  412953 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 07:44:13.678672  412953 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 07:44:13.678923  412953 command_runner.go:130] > # ]
	I1210 07:44:13.678950  412953 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 07:44:13.678958  412953 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 07:44:13.678972  412953 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 07:44:13.678982  412953 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 07:44:13.678985  412953 command_runner.go:130] > #
	I1210 07:44:13.678990  412953 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 07:44:13.678995  412953 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 07:44:13.679001  412953 command_runner.go:130] > # runtime_type = "oci"
	I1210 07:44:13.679006  412953 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 07:44:13.679035  412953 command_runner.go:130] > # inherit_default_runtime = false
	I1210 07:44:13.679045  412953 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 07:44:13.679050  412953 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 07:44:13.679054  412953 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 07:44:13.679060  412953 command_runner.go:130] > # monitor_env = []
	I1210 07:44:13.679065  412953 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 07:44:13.679069  412953 command_runner.go:130] > # allowed_annotations = []
	I1210 07:44:13.679076  412953 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 07:44:13.679085  412953 command_runner.go:130] > # no_sync_log = false
	I1210 07:44:13.679101  412953 command_runner.go:130] > # default_annotations = {}
	I1210 07:44:13.679107  412953 command_runner.go:130] > # stream_websockets = false
	I1210 07:44:13.679111  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.679142  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.679152  412953 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 07:44:13.679158  412953 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 07:44:13.679174  412953 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 07:44:13.679188  412953 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 07:44:13.679194  412953 command_runner.go:130] > #   in $PATH.
	I1210 07:44:13.679200  412953 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 07:44:13.679207  412953 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 07:44:13.679213  412953 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 07:44:13.679219  412953 command_runner.go:130] > #   state.
	I1210 07:44:13.679225  412953 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 07:44:13.679231  412953 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 07:44:13.679240  412953 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 07:44:13.679252  412953 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 07:44:13.679260  412953 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 07:44:13.679267  412953 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 07:44:13.679274  412953 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 07:44:13.679281  412953 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 07:44:13.679291  412953 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 07:44:13.679296  412953 command_runner.go:130] > #   The currently recognized values are:
	I1210 07:44:13.679302  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 07:44:13.679311  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 07:44:13.679325  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 07:44:13.679338  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 07:44:13.679345  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 07:44:13.679357  412953 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 07:44:13.679365  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 07:44:13.679374  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 07:44:13.679380  412953 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 07:44:13.679398  412953 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 07:44:13.679409  412953 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 07:44:13.679420  412953 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 07:44:13.679430  412953 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 07:44:13.679436  412953 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 07:44:13.679445  412953 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 07:44:13.679452  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 07:44:13.679461  412953 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 07:44:13.679464  412953 command_runner.go:130] > #   deprecated option "conmon".
	I1210 07:44:13.679478  412953 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 07:44:13.679487  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 07:44:13.679493  412953 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 07:44:13.679503  412953 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 07:44:13.679511  412953 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 07:44:13.679518  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 07:44:13.679525  412953 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1210 07:44:13.679531  412953 command_runner.go:130] > #   conmon-rs by using:
	I1210 07:44:13.679539  412953 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 07:44:13.679560  412953 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 07:44:13.679570  412953 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 07:44:13.679579  412953 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 07:44:13.679584  412953 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 07:44:13.679593  412953 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 07:44:13.679603  412953 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 07:44:13.679608  412953 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 07:44:13.679617  412953 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 07:44:13.679637  412953 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 07:44:13.679641  412953 command_runner.go:130] > #   when a machine crash happens.
	I1210 07:44:13.679649  412953 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 07:44:13.679659  412953 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 07:44:13.679667  412953 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 07:44:13.679675  412953 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 07:44:13.679681  412953 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 07:44:13.679700  412953 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1210 07:44:13.679707  412953 command_runner.go:130] > #
	I1210 07:44:13.679712  412953 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 07:44:13.679716  412953 command_runner.go:130] > #
	I1210 07:44:13.679727  412953 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 07:44:13.679736  412953 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 07:44:13.679742  412953 command_runner.go:130] > #
	I1210 07:44:13.679749  412953 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 07:44:13.679756  412953 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 07:44:13.679761  412953 command_runner.go:130] > #
	I1210 07:44:13.679773  412953 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 07:44:13.679780  412953 command_runner.go:130] > # feature.
	I1210 07:44:13.679782  412953 command_runner.go:130] > #
	I1210 07:44:13.679788  412953 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1210 07:44:13.679799  412953 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 07:44:13.679805  412953 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 07:44:13.679811  412953 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 07:44:13.679819  412953 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 07:44:13.679824  412953 command_runner.go:130] > #
	I1210 07:44:13.679831  412953 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 07:44:13.679840  412953 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 07:44:13.679848  412953 command_runner.go:130] > #
	I1210 07:44:13.679858  412953 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1210 07:44:13.679864  412953 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 07:44:13.679869  412953 command_runner.go:130] > #
	I1210 07:44:13.679875  412953 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 07:44:13.679881  412953 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 07:44:13.679887  412953 command_runner.go:130] > # limitation.
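	Putting the notifier instructions together, a hedged sketch of a runtime handler allowed to process the annotation (paths match the runc handler defined below; the drop-in itself is hypothetical):
	
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/libexec/crio/runc"
	runtime_root = "/run/runc"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]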
	I1210 07:44:13.679891  412953 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 07:44:13.679896  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 07:44:13.679902  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.679909  412953 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 07:44:13.679913  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.679932  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.679940  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.679944  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.679948  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.679957  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.679961  412953 command_runner.go:130] > allowed_annotations = [
	I1210 07:44:13.680169  412953 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 07:44:13.680183  412953 command_runner.go:130] > ]
	I1210 07:44:13.680190  412953 command_runner.go:130] > privileged_without_host_devices = false
	I1210 07:44:13.680195  412953 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 07:44:13.680200  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 07:44:13.680204  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.680218  412953 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 07:44:13.680228  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.680233  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.680237  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.680244  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.680248  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.680257  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.680461  412953 command_runner.go:130] > privileged_without_host_devices = false
	I1210 07:44:13.680480  412953 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 07:44:13.680486  412953 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 07:44:13.680503  412953 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 07:44:13.680522  412953 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1210 07:44:13.680533  412953 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 07:44:13.680547  412953 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 07:44:13.680554  412953 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 07:44:13.680563  412953 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 07:44:13.680579  412953 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 07:44:13.680591  412953 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 07:44:13.680597  412953 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 07:44:13.680609  412953 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 07:44:13.680613  412953 command_runner.go:130] > # Example:
	I1210 07:44:13.680617  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 07:44:13.680625  412953 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 07:44:13.680632  412953 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 07:44:13.680643  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 07:44:13.680656  412953 command_runner.go:130] > # cpuset = "0-1"
	I1210 07:44:13.680660  412953 command_runner.go:130] > # cpushares = "5"
	I1210 07:44:13.680672  412953 command_runner.go:130] > # cpuquota = "1000"
	I1210 07:44:13.680676  412953 command_runner.go:130] > # cpuperiod = "100000"
	I1210 07:44:13.680680  412953 command_runner.go:130] > # cpulimit = "35"
	I1210 07:44:13.680686  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.680691  412953 command_runner.go:130] > # The workload name is workload-type.
	I1210 07:44:13.680706  412953 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 07:44:13.680717  412953 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 07:44:13.680730  412953 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 07:44:13.680742  412953 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 07:44:13.680748  412953 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1210 07:44:13.680756  412953 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 07:44:13.680763  412953 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 07:44:13.680767  412953 command_runner.go:130] > # Default value is set to true
	I1210 07:44:13.681004  412953 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 07:44:13.681022  412953 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 07:44:13.681028  412953 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 07:44:13.681032  412953 command_runner.go:130] > # Default value is set to 'false'
	I1210 07:44:13.681046  412953 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 07:44:13.681057  412953 command_runner.go:130] > # timezone: To set the timezone for a container in CRI-O.
	I1210 07:44:13.681066  412953 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 07:44:13.681072  412953 command_runner.go:130] > # timezone = ""
	I1210 07:44:13.681078  412953 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 07:44:13.681082  412953 command_runner.go:130] > #
	I1210 07:44:13.681089  412953 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 07:44:13.681101  412953 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 07:44:13.681105  412953 command_runner.go:130] > [crio.image]
	I1210 07:44:13.681112  412953 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 07:44:13.681133  412953 command_runner.go:130] > # default_transport = "docker://"
	I1210 07:44:13.681145  412953 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 07:44:13.681152  412953 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681158  412953 command_runner.go:130] > # global_auth_file = ""
	I1210 07:44:13.681163  412953 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 07:44:13.681168  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681175  412953 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.681182  412953 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 07:44:13.681198  412953 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681207  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681403  412953 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 07:44:13.681421  412953 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 07:44:13.681429  412953 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1210 07:44:13.681436  412953 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1210 07:44:13.681442  412953 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 07:44:13.681460  412953 command_runner.go:130] > # pause_command = "/pause"
	I1210 07:44:13.681466  412953 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 07:44:13.681473  412953 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 07:44:13.681481  412953 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 07:44:13.681487  412953 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 07:44:13.681495  412953 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 07:44:13.681508  412953 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 07:44:13.681512  412953 command_runner.go:130] > # pinned_images = [
	I1210 07:44:13.681700  412953 command_runner.go:130] > # ]
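	The three pattern styles described above, sketched with one entry each (the pause image is this config's default; the glob and keyword entries are hypothetical):
	
	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",   # exact match
		"registry.k8s.io/kube-*",         # glob: wildcard at the end
		"*coredns*",                      # keyword: wildcards on both ends
	]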
	I1210 07:44:13.681712  412953 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 07:44:13.681720  412953 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 07:44:13.681726  412953 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 07:44:13.681733  412953 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 07:44:13.681759  412953 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 07:44:13.681771  412953 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 07:44:13.681777  412953 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 07:44:13.681786  412953 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 07:44:13.681793  412953 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 07:44:13.681800  412953 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1210 07:44:13.681806  412953 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 07:44:13.682016  412953 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 07:44:13.682034  412953 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 07:44:13.682042  412953 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 07:44:13.682046  412953 command_runner.go:130] > # changing them here.
	I1210 07:44:13.682052  412953 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 07:44:13.682069  412953 command_runner.go:130] > # insecure_registries = [
	I1210 07:44:13.682078  412953 command_runner.go:130] > # ]
	I1210 07:44:13.682085  412953 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 07:44:13.682090  412953 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1210 07:44:13.682257  412953 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 07:44:13.682273  412953 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 07:44:13.682285  412953 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 07:44:13.682292  412953 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 07:44:13.682299  412953 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 07:44:13.682504  412953 command_runner.go:130] > # auto_reload_registries = false
	I1210 07:44:13.682520  412953 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 07:44:13.682532  412953 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1210 07:44:13.682540  412953 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 07:44:13.682567  412953 command_runner.go:130] > # pull_progress_timeout = "0s"
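	A worked instance of the interval rule above, with a hypothetical non-zero value:
	
	[crio.image]
	pull_progress_timeout = "10s"   # progress interval becomes 10s / 10 = 1s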
	I1210 07:44:13.682578  412953 command_runner.go:130] > # The mode of short name resolution.
	I1210 07:44:13.682585  412953 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 07:44:13.682595  412953 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 07:44:13.682600  412953 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 07:44:13.682615  412953 command_runner.go:130] > # short_name_mode = "enforcing"
	I1210 07:44:13.682622  412953 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1210 07:44:13.682630  412953 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 07:44:13.683045  412953 command_runner.go:130] > # oci_artifact_mount_support = true
	I1210 07:44:13.683063  412953 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 07:44:13.683080  412953 command_runner.go:130] > # CNI plugins.
	I1210 07:44:13.683084  412953 command_runner.go:130] > [crio.network]
	I1210 07:44:13.683091  412953 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 07:44:13.683100  412953 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1210 07:44:13.683104  412953 command_runner.go:130] > # cni_default_network = ""
	I1210 07:44:13.683110  412953 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 07:44:13.683116  412953 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 07:44:13.683122  412953 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 07:44:13.683126  412953 command_runner.go:130] > # plugin_dirs = [
	I1210 07:44:13.683439  412953 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 07:44:13.683727  412953 command_runner.go:130] > # ]
	I1210 07:44:13.683742  412953 command_runner.go:130] > # List of included pod metrics.
	I1210 07:44:13.684014  412953 command_runner.go:130] > # included_pod_metrics = [
	I1210 07:44:13.684312  412953 command_runner.go:130] > # ]
	I1210 07:44:13.684328  412953 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1210 07:44:13.684333  412953 command_runner.go:130] > [crio.metrics]
	I1210 07:44:13.684339  412953 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 07:44:13.684905  412953 command_runner.go:130] > # enable_metrics = false
	I1210 07:44:13.684921  412953 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 07:44:13.684926  412953 command_runner.go:130] > # By default, all metrics are enabled.
	I1210 07:44:13.684933  412953 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 07:44:13.684946  412953 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 07:44:13.684969  412953 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 07:44:13.685240  412953 command_runner.go:130] > # metrics_collectors = [
	I1210 07:44:13.685580  412953 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 07:44:13.685893  412953 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 07:44:13.686203  412953 command_runner.go:130] > # 	"containers_oom_total",
	I1210 07:44:13.686514  412953 command_runner.go:130] > # 	"processes_defunct",
	I1210 07:44:13.686821  412953 command_runner.go:130] > # 	"operations_total",
	I1210 07:44:13.687152  412953 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 07:44:13.687476  412953 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 07:44:13.687786  412953 command_runner.go:130] > # 	"operations_errors_total",
	I1210 07:44:13.688090  412953 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 07:44:13.688395  412953 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 07:44:13.688727  412953 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 07:44:13.689070  412953 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 07:44:13.689083  412953 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 07:44:13.689089  412953 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 07:44:13.689093  412953 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 07:44:13.689098  412953 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 07:44:13.689634  412953 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 07:44:13.689646  412953 command_runner.go:130] > # ]
	I1210 07:44:13.689654  412953 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 07:44:13.689658  412953 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 07:44:13.689671  412953 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 07:44:13.689696  412953 command_runner.go:130] > # metrics_port = 9090
	I1210 07:44:13.689701  412953 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 07:44:13.689706  412953 command_runner.go:130] > # metrics_socket = ""
	I1210 07:44:13.689716  412953 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 07:44:13.689722  412953 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 07:44:13.689731  412953 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 07:44:13.689737  412953 command_runner.go:130] > # certificate on any modification event.
	I1210 07:44:13.689741  412953 command_runner.go:130] > # metrics_cert = ""
	I1210 07:44:13.689746  412953 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 07:44:13.689751  412953 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 07:44:13.689764  412953 command_runner.go:130] > # metrics_key = ""
	I1210 07:44:13.689770  412953 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 07:44:13.689774  412953 command_runner.go:130] > [crio.tracing]
	I1210 07:44:13.689781  412953 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 07:44:13.689785  412953 command_runner.go:130] > # enable_tracing = false
	I1210 07:44:13.689792  412953 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1210 07:44:13.689799  412953 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 07:44:13.689806  412953 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 07:44:13.689833  412953 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1210 07:44:13.689842  412953 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 07:44:13.689845  412953 command_runner.go:130] > [crio.nri]
	I1210 07:44:13.689850  412953 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 07:44:13.689861  412953 command_runner.go:130] > # enable_nri = true
	I1210 07:44:13.689865  412953 command_runner.go:130] > # NRI socket to listen on.
	I1210 07:44:13.689873  412953 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 07:44:13.689877  412953 command_runner.go:130] > # NRI plugin directory to use.
	I1210 07:44:13.689882  412953 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 07:44:13.689890  412953 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 07:44:13.689894  412953 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 07:44:13.689900  412953 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 07:44:13.689965  412953 command_runner.go:130] > # nri_disable_connections = false
	I1210 07:44:13.689975  412953 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 07:44:13.689991  412953 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 07:44:13.689997  412953 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 07:44:13.690006  412953 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 07:44:13.690011  412953 command_runner.go:130] > # NRI default validator configuration.
	I1210 07:44:13.690018  412953 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1210 07:44:13.690027  412953 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 07:44:13.690036  412953 command_runner.go:130] > # can be restricted/rejected:
	I1210 07:44:13.690044  412953 command_runner.go:130] > # - OCI hook injection
	I1210 07:44:13.690060  412953 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 07:44:13.690068  412953 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 07:44:13.690072  412953 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 07:44:13.690076  412953 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 07:44:13.690083  412953 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 07:44:13.690093  412953 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 07:44:13.690099  412953 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 07:44:13.690107  412953 command_runner.go:130] > #
	I1210 07:44:13.690111  412953 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 07:44:13.690115  412953 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 07:44:13.690122  412953 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 07:44:13.690134  412953 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 07:44:13.690148  412953 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 07:44:13.690154  412953 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 07:44:13.690159  412953 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 07:44:13.690165  412953 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 07:44:13.690168  412953 command_runner.go:130] > # ]
	I1210 07:44:13.690174  412953 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
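	A sketch of the validator keys listed above, enabled and rejecting only OCI hook injection, with a hypothetical required plugin:
	
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_required_plugins = [
		"my-policy-plugin",   # hypothetical plugin name
	]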
	I1210 07:44:13.690182  412953 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 07:44:13.690192  412953 command_runner.go:130] > [crio.stats]
	I1210 07:44:13.690198  412953 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 07:44:13.690212  412953 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 07:44:13.690219  412953 command_runner.go:130] > # stats_collection_period = 0
	I1210 07:44:13.690225  412953 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 07:44:13.690232  412953 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 07:44:13.690243  412953 command_runner.go:130] > # collection_period = 0
	I1210 07:44:13.692149  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648702659Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 07:44:13.692177  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648881459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 07:44:13.692188  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648978856Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 07:44:13.692196  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649067965Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 07:44:13.692212  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649235303Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.692221  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649618857Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 07:44:13.692237  412953 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 07:44:13.692317  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:13.692335  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:13.692359  412953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:44:13.692385  412953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:44:13.692523  412953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:44:13.692606  412953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:44:13.699318  412953 command_runner.go:130] > kubeadm
	I1210 07:44:13.699338  412953 command_runner.go:130] > kubectl
	I1210 07:44:13.699343  412953 command_runner.go:130] > kubelet
	I1210 07:44:13.700197  412953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:44:13.700295  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:44:13.707538  412953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:44:13.720130  412953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:44:13.732445  412953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1210 07:44:13.744899  412953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:44:13.748570  412953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 07:44:13.748818  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.875367  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:13.911048  412953 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:44:13.911077  412953 certs.go:195] generating shared ca certs ...
	I1210 07:44:13.911094  412953 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:13.911231  412953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:44:13.911285  412953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:44:13.911297  412953 certs.go:257] generating profile certs ...
	I1210 07:44:13.911404  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:44:13.911477  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:44:13.911525  412953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:44:13.911539  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 07:44:13.911552  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 07:44:13.911567  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 07:44:13.911578  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 07:44:13.911593  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 07:44:13.911610  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 07:44:13.911622  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 07:44:13.911637  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 07:44:13.911683  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:44:13.911717  412953 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:44:13.911729  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:44:13.911762  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:44:13.911791  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:44:13.911819  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:44:13.911865  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:13.911900  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /usr/share/ca-certificates/3785282.pem
	I1210 07:44:13.911918  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:13.911928  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem -> /usr/share/ca-certificates/378528.pem
	I1210 07:44:13.912577  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:44:13.931574  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:44:13.949287  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:44:13.966704  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:44:13.984537  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:44:14.005273  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:44:14.024726  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:44:14.043246  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:44:14.061500  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:44:14.078597  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:44:14.096003  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:44:14.113316  412953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
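The NewFileAsset lines and the scp calls that follow are the same information twice: first the local-to-remote pairing is declared, then each pair is copied. Restated as plain data (a hypothetical layout; minikube builds these as asset structs, and the profile name is copied from the log):

package main

import "fmt"

func main() {
	profile := "functional-314220"
	type asset struct{ src, dst string }
	// Note ca.crt lands in two places: the apiserver cert dir and the
	// OS trust store (as minikubeCA.pem).
	assets := []asset{
		{".minikube/ca.crt", "/var/lib/minikube/certs/ca.crt"},
		{".minikube/ca.crt", "/usr/share/ca-certificates/minikubeCA.pem"},
		{".minikube/profiles/" + profile + "/apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"},
		{".minikube/profiles/" + profile + "/proxy-client.crt", "/var/lib/minikube/certs/proxy-client.crt"},
	}
	for _, a := range assets {
		fmt.Printf("scp %s --> %s\n", a.src, a.dst) // one ssh_runner scp per pair
	}
}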
	I1210 07:44:14.125784  412953 ssh_runner.go:195] Run: openssl version
	I1210 07:44:14.132223  412953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 07:44:14.132300  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.139621  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:44:14.146891  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150749  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150804  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150854  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.191223  412953 command_runner.go:130] > 3ec20f2e
	I1210 07:44:14.191672  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:44:14.199095  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.206573  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:44:14.214321  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218345  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218446  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218516  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.259240  412953 command_runner.go:130] > b5213941
	I1210 07:44:14.259776  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:44:14.267399  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.274814  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:44:14.282253  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286034  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286101  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286170  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.327536  412953 command_runner.go:130] > 51391683
	I1210 07:44:14.327674  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
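Each cycle above (ln -fs by name, `openssl x509 -hash`, `sudo test -L <hash>.0`) exists because OpenSSL trust directories look certificates up via symlinks named after the subject hash. A sketch of the step that produces such a <hash>.0 link (installHashLink is an assumed helper; the log itself only verifies the link exists):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installHashLink creates the /etc/ssl/certs/<subject-hash>.0 symlink that
// `sudo test -L` checks for above.
func installHashLink(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(trustDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // behave like `ln -fs`: replace a stale link
	return os.Symlink(certPath, link)
}

func main() {
	err := installHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}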
	I1210 07:44:14.335034  412953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338581  412953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338609  412953 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 07:44:14.338616  412953 command_runner.go:130] > Device: 259,1	Inode: 1322411     Links: 1
	I1210 07:44:14.338623  412953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:14.338628  412953 command_runner.go:130] > Access: 2025-12-10 07:40:07.276287392 +0000
	I1210 07:44:14.338634  412953 command_runner.go:130] > Modify: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338639  412953 command_runner.go:130] > Change: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338644  412953 command_runner.go:130] >  Birth: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338702  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:44:14.379186  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.379683  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:44:14.420781  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.421255  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:44:14.461926  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.462055  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:44:14.509912  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.510522  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:44:14.558004  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.558477  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:44:14.599044  412953 command_runner.go:130] > Certificate will not expire
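`openssl x509 -checkend 86400` asks one question: does the certificate expire within the next 86400 seconds (24 hours)? The same check in pure Go with crypto/x509 (path taken from the log; expiresWithin is a hypothetical helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside
// the next d, which is exactly what `-checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}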
	I1210 07:44:14.599455  412953 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:14.599550  412953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:44:14.599615  412953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:44:14.630244  412953 cri.go:89] found id: ""
	I1210 07:44:14.630352  412953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:44:14.638132  412953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 07:44:14.638152  412953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 07:44:14.638158  412953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 07:44:14.638171  412953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:44:14.638176  412953 kubeadm.go:598] restartPrimaryControlPlane start ...
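restartPrimaryControlPlane is chosen over a fresh `kubeadm init` because the `sudo ls` two entries up found all three kubeadm artifacts in place. The decision reduces to an existence check (sketch with a hypothetical runner; not minikube's kubeadm.go):

package main

import "fmt"

// clusterRestartPossible reports whether the node already carries a
// kubeadm-provisioned control plane worth repairing in place.
func clusterRestartPossible(run func(cmd string) error) bool {
	// `ls` exits non-zero if any of the three paths is missing.
	err := run("sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd")
	return err == nil
}

func main() {
	fakeRun := func(cmd string) error { return nil } // pretend all paths exist
	if clusterRestartPossible(fakeRun) {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	} else {
		fmt.Println("no existing cluster config, running kubeadm init")
	}
}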
	I1210 07:44:14.638225  412953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:44:14.645608  412953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:44:14.646002  412953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-314220" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646112  412953 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-376671/kubeconfig needs updating (will repair): [kubeconfig missing "functional-314220" cluster setting kubeconfig missing "functional-314220" context setting]
	I1210 07:44:14.646387  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
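The kubeconfig repair above adds the missing cluster and context entries for the profile and rewrites the file under a lock. A minimal sketch of the same repair using client-go's clientcmd package (field values copied from the log; the AuthInfo wiring is abbreviated):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22089-376671/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // start fresh if the file is missing or unreadable
	}
	cfg.Clusters["functional-314220"] = &api.Cluster{
		Server:               "https://192.168.49.2:8441",
		CertificateAuthority: "/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt",
	}
	cfg.Contexts["functional-314220"] = &api.Context{
		Cluster:  "functional-314220",
		AuthInfo: "functional-314220", // client cert/key entry omitted in this sketch
	}
	cfg.CurrentContext = "functional-314220"
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}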
	I1210 07:44:14.646808  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646962  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.647769  412953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:44:14.647791  412953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 07:44:14.647797  412953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:44:14.647801  412953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:44:14.647806  412953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:44:14.647858  412953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 07:44:14.648134  412953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:44:14.656007  412953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 07:44:14.656041  412953 kubeadm.go:602] duration metric: took 17.859608ms to restartPrimaryControlPlane
	I1210 07:44:14.656051  412953 kubeadm.go:403] duration metric: took 56.601079ms to StartCluster
	I1210 07:44:14.656066  412953 settings.go:142] acquiring lock: {Name:mk83336eaf1e9f7632e16e15e8d9e14eb0e0d0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.656132  412953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.656799  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.657004  412953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:44:14.657416  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:14.657431  412953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:44:14.658092  412953 addons.go:70] Setting storage-provisioner=true in profile "functional-314220"
	I1210 07:44:14.658110  412953 addons.go:239] Setting addon storage-provisioner=true in "functional-314220"
	I1210 07:44:14.658137  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.658702  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.665050  412953 addons.go:70] Setting default-storageclass=true in profile "functional-314220"
	I1210 07:44:14.665125  412953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-314220"
	I1210 07:44:14.665550  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.671074  412953 out.go:179] * Verifying Kubernetes components...
	I1210 07:44:14.676445  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:14.698425  412953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:44:14.701187  412953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.701211  412953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:44:14.701278  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.705662  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.705841  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.706176  412953 addons.go:239] Setting addon default-storageclass=true in "functional-314220"
	I1210 07:44:14.706207  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.706646  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.744732  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:14.744810  412953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:14.744830  412953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:44:14.744900  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.778977  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:14.876345  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:14.912899  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.922881  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.662190  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662227  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662277  412953 retry.go:31] will retry after 311.954263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662347  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662381  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662389  412953 retry.go:31] will retry after 234.07921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
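Every `retry.go:31] will retry after …` line in this stretch is one turn of a jittered backoff loop around `kubectl apply`; the delays grow and carry random jitter so concurrent appliers don't hammer the not-yet-listening apiserver in lockstep. A compact sketch of the pattern (parameters assumed; not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// doubling the delay each round and adding random jitter.
func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("dial tcp [::1]:8441: connect: connection refused")
		}
		return nil
	})
	fmt.Println("done:", err)
}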
	I1210 07:44:15.662447  412953 node_ready.go:35] waiting up to 6m0s for node "functional-314220" to be "Ready" ...
	I1210 07:44:15.662665  412953 type.go:168] "Request Body" body=""
	I1210 07:44:15.662765  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:15.663157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:15.897488  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.957295  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.957408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.957431  412953 retry.go:31] will retry after 307.155853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.974530  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.030916  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.034621  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.034655  412953 retry.go:31] will retry after 246.948718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.162840  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.162973  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.163310  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.265735  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:16.282284  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.335651  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.339071  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.339103  412953 retry.go:31] will retry after 647.058742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361763  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.361804  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361822  412953 retry.go:31] will retry after 514.560746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.663231  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.663327  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.663641  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.877219  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.942769  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.942876  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.942918  412953 retry.go:31] will retry after 1.098847883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.987296  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.051987  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.055923  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.055964  412953 retry.go:31] will retry after 522.145884ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.163324  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.163405  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.163711  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:17.578391  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.635896  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.639746  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.639777  412953 retry.go:31] will retry after 768.766099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.662946  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.663049  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.663399  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:17.663474  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
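The GET /api/v1/nodes/functional-314220 requests repeating every ~500ms implement "waiting up to 6m0s for node … to be Ready": poll, tolerate connection refused while the apiserver restarts, and stop when the Ready condition turns True or the deadline passes. A client-go sketch of that loop (kubeconfig path from the log; waitNodeReady is a hypothetical helper, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// ignoring transient errors such as connection refused during restart.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22089-376671/kubeconfig")
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "functional-314220"))
}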
	I1210 07:44:18.042986  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:18.101043  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.104777  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.104811  412953 retry.go:31] will retry after 877.527078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.163066  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.163146  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.163494  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.409040  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:18.473157  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.473195  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.473221  412953 retry.go:31] will retry after 1.043117699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.663503  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.663629  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.663908  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.983598  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:19.054379  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.057795  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.057861  412953 retry.go:31] will retry after 2.806616267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.163140  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.163219  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.163514  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:19.517094  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:19.577109  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.577146  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.577191  412953 retry.go:31] will retry after 2.260515502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.663401  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.663487  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.663841  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:19.663910  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:20.163656  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.163728  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.164096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:20.662791  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.662881  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.663196  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.162812  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.162886  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.163185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.662729  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.662808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.663095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.838627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:21.865153  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:21.916464  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.916504  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.916523  412953 retry.go:31] will retry after 2.650338189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.931641  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.931686  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.931712  412953 retry.go:31] will retry after 2.932548046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:22.163174  412953 type.go:168] "Request Body" body=""
	I1210 07:44:22.163252  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:22.163593  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:22.163668  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:22.663491  412953 type.go:168] "Request Body" body=""
	I1210 07:44:22.663596  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:22.663955  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:23.162683  412953 type.go:168] "Request Body" body=""
	I1210 07:44:23.162754  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:23.163096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:23.662729  412953 type.go:168] "Request Body" body=""
	I1210 07:44:23.662804  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:23.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:24.162801  412953 type.go:168] "Request Body" body=""
	I1210 07:44:24.162914  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:24.163280  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:24.567824  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:24.621746  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:24.625216  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.625246  412953 retry.go:31] will retry after 7.727905191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
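
The validation error above is worth unpacking: kubectl apply first downloads the OpenAPI schema from the apiserver to validate the manifest, so while the apiserver refuses connections the command fails before the apply is even attempted, and the --validate=false suggested in the error text would only skip the schema download, not make the apply succeed. A sketch of the same invocation via os/exec (paths copied verbatim from the log; applyManifest is a hypothetical wrapper, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifest reproduces the command in the log. sudo accepts the
	// leading VAR=value as an environment setting for the target command.
	func applyManifest(path string) ([]byte, error) {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", path)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return out, fmt.Errorf("apply %s: %w", path, err) // exit status 1 on validation failure
		}
		return out, nil
	}
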
	I1210 07:44:24.663687  412953 type.go:168] "Request Body" body=""
	I1210 07:44:24.663760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:24.664012  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:24.664064  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:24.864476  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:24.921495  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:24.921557  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.921581  412953 retry.go:31] will retry after 3.915945796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:25.162916  412953 type.go:168] "Request Body" body=""
	I1210 07:44:25.162995  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:25.163327  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:25.663045  412953 type.go:168] "Request Body" body=""
	I1210 07:44:25.663124  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:25.663415  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:26.163115  412953 type.go:168] "Request Body" body=""
	I1210 07:44:26.163196  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:26.163520  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:26.663439  412953 type.go:168] "Request Body" body=""
	I1210 07:44:26.663518  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:26.663824  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:27.163578  412953 type.go:168] "Request Body" body=""
	I1210 07:44:27.163681  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:27.164000  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:27.164069  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:27.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:44:27.662823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:27.663205  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:44:28.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:28.163219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:44:28.662869  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:28.663208  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.838651  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:28.899244  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:28.899280  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:28.899298  412953 retry.go:31] will retry after 8.041674514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
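
The ssh_runner.go:195 "Run:" lines execute these kubectl commands inside the minikube node over SSH. A minimal stand-in using golang.org/x/crypto/ssh; the address, user, and auth setup are placeholders, not minikube's actual runner:

	package main

	import (
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH opens one session on the node and runs a single command,
	// returning its combined output; a non-zero exit comes back as an
	// *ssh.ExitError, which the log renders as "Process exited with status 1".
	func runOverSSH(addr string, cfg *ssh.ClientConfig, command string) (string, error) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", fmt.Errorf("dial %s: %w", addr, err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()

		out, err := session.CombinedOutput(command)
		return string(out), err
	}
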
	I1210 07:44:29.162702  412953 type.go:168] "Request Body" body=""
	I1210 07:44:29.162772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:29.163052  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:29.662768  412953 type.go:168] "Request Body" body=""
	I1210 07:44:29.662841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:29.663169  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:29.663226  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:30.162886  412953 type.go:168] "Request Body" body=""
	I1210 07:44:30.162968  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:30.163308  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:30.662996  412953 type.go:168] "Request Body" body=""
	I1210 07:44:30.663089  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:30.663373  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:31.163117  412953 type.go:168] "Request Body" body=""
	I1210 07:44:31.163198  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:31.163590  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:31.662807  412953 type.go:168] "Request Body" body=""
	I1210 07:44:31.662884  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:31.663200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:31.663248  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:32.162759  412953 type.go:168] "Request Body" body=""
	I1210 07:44:32.162841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:32.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:32.353668  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:32.409993  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:32.413403  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:32.413432  412953 retry.go:31] will retry after 6.914628842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:32.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:44:32.662856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:32.663233  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:33.162956  412953 type.go:168] "Request Body" body=""
	I1210 07:44:33.163049  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:33.163356  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:33.662689  412953 type.go:168] "Request Body" body=""
	I1210 07:44:33.662755  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:33.663031  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:34.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:44:34.162848  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:34.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:34.163258  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:34.663111  412953 type.go:168] "Request Body" body=""
	I1210 07:44:34.663191  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:34.663487  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:35.163272  412953 type.go:168] "Request Body" body=""
	I1210 07:44:35.163341  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:35.163701  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:35.663625  412953 type.go:168] "Request Body" body=""
	I1210 07:44:35.663709  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:35.664060  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:36.162762  412953 type.go:168] "Request Body" body=""
	I1210 07:44:36.162838  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:36.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:36.663557  412953 type.go:168] "Request Body" body=""
	I1210 07:44:36.663625  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:36.663891  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:36.663931  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
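
The paired round_trippers.go:527/632 "Request"/"Response" lines come from a logging wrapper around the Kubernetes HTTP client; when the connection is refused, the response line logs an empty status and 0 milliseconds, exactly as above. A sketch of that pattern as a plain net/http RoundTripper (loggingTransport is a hypothetical stand-in, not client-go's implementation):

	package main

	import (
		"log"
		"net/http"
		"time"
	)

	// loggingTransport wraps an http.RoundTripper and logs each request
	// and response, much like the round_trippers.go lines above.
	type loggingTransport struct {
		next http.RoundTripper
	}

	func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		start := time.Now()
		log.Printf("Request verb=%q url=%q", req.Method, req.URL)
		resp, err := t.next.RoundTrip(req)
		ms := time.Since(start).Milliseconds()
		if err != nil {
			// a refused connection logs an empty status, as in the log above
			log.Printf("Response status=%q milliseconds=%d err=%v", "", ms, err)
			return nil, err
		}
		log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
		return resp, nil
	}
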
	I1210 07:44:36.941565  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:36.998306  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:37.009698  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:37.009736  412953 retry.go:31] will retry after 8.728706472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:37.163096  412953 type.go:168] "Request Body" body=""
	I1210 07:44:37.163180  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:37.163526  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:37.663088  412953 type.go:168] "Request Body" body=""
	I1210 07:44:37.663168  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:37.663465  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:38.162753  412953 type.go:168] "Request Body" body=""
	I1210 07:44:38.162830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:38.163170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:38.662738  412953 type.go:168] "Request Body" body=""
	I1210 07:44:38.662816  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:38.663200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:39.162911  412953 type.go:168] "Request Body" body=""
	I1210 07:44:39.162982  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:39.163311  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:39.163365  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:39.328689  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:39.391413  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:39.391461  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:39.391479  412953 retry.go:31] will retry after 20.069023813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
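
Every failure above bottoms out in the same transport error, "dial tcp ... connect: connection refused" on port 8441, which means nothing is listening where the apiserver should be (as opposed to a timeout, which would point at a network or firewall problem). A quick diagnostic sketch that distinguishes the two, with addresses copied from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe reports whether anything is listening on addr: "connection
	// refused" means a closed port (apiserver down or restarting), while a
	// timeout suggests the host itself is unreachable.
	func probe(addr string) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s unreachable: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("%s accepting connections\n", addr)
	}

	func main() {
		probe("192.168.49.2:8441") // the apiserver endpoint minikube polls
		probe("localhost:8441")    // the endpoint kubectl uses inside the node
	}
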
	I1210 07:44:39.663623  412953 type.go:168] "Request Body" body=""
	I1210 07:44:39.663692  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:39.664007  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:40.163717  412953 type.go:168] "Request Body" body=""
	I1210 07:44:40.163789  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:40.164098  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:40.662854  412953 type.go:168] "Request Body" body=""
	I1210 07:44:40.662926  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:40.663261  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:41.163240  412953 type.go:168] "Request Body" body=""
	I1210 07:44:41.163310  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:41.163588  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:41.163638  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:41.663374  412953 type.go:168] "Request Body" body=""
	I1210 07:44:41.663448  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:41.663787  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:42.163614  412953 type.go:168] "Request Body" body=""
	I1210 07:44:42.163700  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:42.164110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:42.662817  412953 type.go:168] "Request Body" body=""
	I1210 07:44:42.662893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:42.663220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:43.162788  412953 type.go:168] "Request Body" body=""
	I1210 07:44:43.162930  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:43.163267  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:43.662791  412953 type.go:168] "Request Body" body=""
	I1210 07:44:43.662874  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:43.663242  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:43.663300  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:44.162963  412953 type.go:168] "Request Body" body=""
	I1210 07:44:44.163057  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:44.163345  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:44.663123  412953 type.go:168] "Request Body" body=""
	I1210 07:44:44.663199  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:44.663539  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:45.163160  412953 type.go:168] "Request Body" body=""
	I1210 07:44:45.163248  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:45.163618  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:45.663558  412953 type.go:168] "Request Body" body=""
	I1210 07:44:45.663640  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:45.663930  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:45.663983  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:45.739308  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:45.803966  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:45.804014  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:45.804032  412953 retry.go:31] will retry after 15.619557427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:46.163368  412953 type.go:168] "Request Body" body=""
	I1210 07:44:46.163449  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:46.163809  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:46.663723  412953 type.go:168] "Request Body" body=""
	I1210 07:44:46.663804  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:46.664157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:47.162830  412953 type.go:168] "Request Body" body=""
	I1210 07:44:47.162904  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:47.163246  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:47.662803  412953 type.go:168] "Request Body" body=""
	I1210 07:44:47.662878  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:47.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:48.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:44:48.162868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:48.163231  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:48.163295  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:48.662914  412953 type.go:168] "Request Body" body=""
	I1210 07:44:48.662989  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:48.663322  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:49.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:44:49.162856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:49.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:49.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:44:49.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:49.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:50.162736  412953 type.go:168] "Request Body" body=""
	I1210 07:44:50.162810  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:50.163100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:50.662994  412953 type.go:168] "Request Body" body=""
	I1210 07:44:50.663140  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:50.663480  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:50.663536  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:51.163315  412953 type.go:168] "Request Body" body=""
	I1210 07:44:51.163397  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:51.163738  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:51.663484  412953 type.go:168] "Request Body" body=""
	I1210 07:44:51.663554  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:51.663817  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:52.163592  412953 type.go:168] "Request Body" body=""
	I1210 07:44:52.163675  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:52.164024  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:52.663725  412953 type.go:168] "Request Body" body=""
	I1210 07:44:52.663805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:52.664170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:52.664269  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:53.162915  412953 type.go:168] "Request Body" body=""
	I1210 07:44:53.162989  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:53.163353  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:53.662767  412953 type.go:168] "Request Body" body=""
	I1210 07:44:53.662844  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:53.663173  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:54.162859  412953 type.go:168] "Request Body" body=""
	I1210 07:44:54.162935  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:54.163287  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:54.663094  412953 type.go:168] "Request Body" body=""
	I1210 07:44:54.663170  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:54.663454  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:55.163141  412953 type.go:168] "Request Body" body=""
	I1210 07:44:55.163215  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:55.163544  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:55.163602  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:55.663472  412953 type.go:168] "Request Body" body=""
	I1210 07:44:55.663544  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:55.663857  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:56.163640  412953 type.go:168] "Request Body" body=""
	I1210 07:44:56.163716  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:56.163996  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:56.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:44:56.662805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:56.663197  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:57.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:44:57.162851  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:57.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:57.662711  412953 type.go:168] "Request Body" body=""
	I1210 07:44:57.662787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:57.663158  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:57.663213  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:58.162868  412953 type.go:168] "Request Body" body=""
	I1210 07:44:58.162941  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:58.163282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:58.662753  412953 type.go:168] "Request Body" body=""
	I1210 07:44:58.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:58.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:59.162735  412953 type.go:168] "Request Body" body=""
	I1210 07:44:59.162805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:59.163143  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:59.460756  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:59.515959  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:59.519313  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:59.519349  412953 retry.go:31] will retry after 28.214559207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:59.663650  412953 type.go:168] "Request Body" body=""
	I1210 07:44:59.663726  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:59.664046  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:59.664099  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:00.162860  412953 type.go:168] "Request Body" body=""
	I1210 07:45:00.162952  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:00.163293  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:00.671201  412953 type.go:168] "Request Body" body=""
	I1210 07:45:00.671283  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:00.671619  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:01.163405  412953 type.go:168] "Request Body" body=""
	I1210 07:45:01.163498  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:01.163856  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:01.424291  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:01.504370  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:01.504408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:01.504426  412953 retry.go:31] will retry after 11.28420248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
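
All of these retries are implicitly waiting for one thing: the apiserver accepting connections again. An alternative to blind backoff is to poll a health endpoint first and only then re-apply; a sketch under the assumption that the apiserver exposes the standard /readyz endpoint on the logged address (waitForAPIServer is hypothetical, and InsecureSkipVerify is only to keep the sketch self-contained -- real callers should trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer polls the apiserver health endpoint until it
	// answers 200, which is the condition every retry above is waiting for.
	func waitForAPIServer(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %v", timeout)
	}

A caller in this scenario might use waitForAPIServer("https://192.168.49.2:8441/readyz", 2*time.Minute) before retrying the addon applies.
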
	I1210 07:45:01.662884  412953 type.go:168] "Request Body" body=""
	I1210 07:45:01.662972  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:01.663364  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:02.162730  412953 type.go:168] "Request Body" body=""
	I1210 07:45:02.162802  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:02.163079  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:02.163130  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... ~500ms GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-314220 from 07:45:02.662 through 07:45:12.663 elided; every attempt failed with "dial tcp 192.168.49.2:8441: connect: connection refused", with node_ready will-retry warnings at 07:45:04, 07:45:06, 07:45:08, and 07:45:10 ...]
	I1210 07:45:12.789627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:12.850283  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:12.850328  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:12.850347  412953 retry.go:31] will retry after 28.725170788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
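One detail worth noting in the stderr above: kubectl apply's client-side validation first downloads the cluster's OpenAPI schema (the /openapi/v2 URL in the message), so while the apiserver is down the apply fails at the validation step, before the manifest is even sent; that is why the error suggests --validate=false. Below is a hypothetical reachability probe of that same endpoint, with host and port taken from the log; it is only a sketch, not anything kubectl or minikube ships:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Same URL kubectl fetches for client-side validation. TLS
    	// verification is skipped because this is only a reachability
    	// check against a local apiserver, not a real API client.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
    	if err != nil {
    		fmt.Println("openapi unreachable, validation would fail:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("openapi reachable:", resp.Status)
    }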
	[... ~500ms GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-314220 from 07:45:13.162 through 07:45:27.663 elided; every attempt failed with "dial tcp 192.168.49.2:8441: connect: connection refused", with node_ready will-retry warnings at 07:45:13, 07:45:15, 07:45:17, 07:45:20, 07:45:22, 07:45:24, and 07:45:26 ...]
	I1210 07:45:27.734263  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:45:27.790479  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:27.794248  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:27.794290  412953 retry.go:31] will retry after 44.751938518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... ~500ms GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-314220 from 07:45:28.162 through 07:45:41.163 elided; every attempt failed with "dial tcp 192.168.49.2:8441: connect: connection refused", with node_ready will-retry warnings at 07:45:29, 07:45:31, 07:45:33, 07:45:35, 07:45:38, and 07:45:40 ...]
	I1210 07:45:41.576476  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:41.640104  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640160  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640252  412953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	[... ~500ms GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-314220 from 07:45:41.663 through 07:45:57.163 elided; every attempt failed with "dial tcp 192.168.49.2:8441: connect: connection refused", with node_ready will-retry warnings at 07:45:42, 07:45:45, 07:45:47, 07:45:49, 07:45:51, 07:45:54, and 07:45:56 ...]
	I1210 07:45:57.663634  412953 type.go:168] "Request Body" body=""
	I1210 07:45:57.663715  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:57.664039  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:58.162743  412953 type.go:168] "Request Body" body=""
	I1210 07:45:58.162822  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:58.163189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:58.662924  412953 type.go:168] "Request Body" body=""
	I1210 07:45:58.663036  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:58.663358  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:59.162715  412953 type.go:168] "Request Body" body=""
	I1210 07:45:59.162781  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:59.163074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:59.163122  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:59.662789  412953 type.go:168] "Request Body" body=""
	I1210 07:45:59.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:59.663251  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:00.162998  412953 type.go:168] "Request Body" body=""
	I1210 07:46:00.163123  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:00.163461  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:00.663540  412953 type.go:168] "Request Body" body=""
	I1210 07:46:00.663611  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:00.663865  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:01.163683  412953 type.go:168] "Request Body" body=""
	I1210 07:46:01.163757  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:01.164109  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:01.164167  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:01.662851  412953 type.go:168] "Request Body" body=""
	I1210 07:46:01.662929  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:01.663282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:02.162953  412953 type.go:168] "Request Body" body=""
	I1210 07:46:02.163054  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:02.163311  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:02.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:46:02.662862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:02.663241  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:03.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:46:03.162854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:03.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:03.662701  412953 type.go:168] "Request Body" body=""
	I1210 07:46:03.662776  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:03.663100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:03.663150  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:04.162777  412953 type.go:168] "Request Body" body=""
	I1210 07:46:04.162853  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:04.163234  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:04.663099  412953 type.go:168] "Request Body" body=""
	I1210 07:46:04.663184  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:04.663539  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:05.163328  412953 type.go:168] "Request Body" body=""
	I1210 07:46:05.163396  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:05.163668  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:05.663706  412953 type.go:168] "Request Body" body=""
	I1210 07:46:05.663785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:05.664109  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:05.664167  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:06.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:46:06.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:06.163213  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:06.662683  412953 type.go:168] "Request Body" body=""
	I1210 07:46:06.662757  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:06.663110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:07.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:46:07.162915  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:07.163278  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:07.663001  412953 type.go:168] "Request Body" body=""
	I1210 07:46:07.663101  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:07.663452  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:08.163105  412953 type.go:168] "Request Body" body=""
	I1210 07:46:08.163173  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:08.163505  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:08.163551  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:08.663246  412953 type.go:168] "Request Body" body=""
	I1210 07:46:08.663355  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:08.663696  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:09.163360  412953 type.go:168] "Request Body" body=""
	I1210 07:46:09.163436  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:09.163764  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:09.663545  412953 type.go:168] "Request Body" body=""
	I1210 07:46:09.663613  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:09.663865  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:10.163582  412953 type.go:168] "Request Body" body=""
	I1210 07:46:10.163660  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:10.164166  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:10.164222  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:10.662978  412953 type.go:168] "Request Body" body=""
	I1210 07:46:10.663066  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:10.663429  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:11.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:46:11.162791  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:11.163072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:11.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:46:11.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:11.663211  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:12.162917  412953 type.go:168] "Request Body" body=""
	I1210 07:46:12.162995  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:12.163357  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
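The loop condensed above is minikube's node_ready poll: it re-reads the node object until the Ready condition turns True, and while the apiserver is down it can only ever see a TCP refusal. A minimal sketch of the same check done by hand, assuming the test cluster's kubeconfig is active and kubectl is on PATH (the node name is taken from the log; everything else is illustrative):

    # Hypothetical manual equivalent of the node_ready.go poll above.
    # Prints "True"/"False"/"Unknown", or fails with the same
    # connection error while nothing listens on 192.168.49.2:8441.
    kubectl get node functional-314220 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Re-polling at the ~500ms cadence seen in the log:
    until [ "$(kubectl get node functional-314220 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' \
        2>/dev/null)" = "True" ]; do sleep 0.5; done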
	I1210 07:46:12.546948  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:46:12.609717  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609753  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609836  412953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
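The storageclass apply fails at client-side validation, which needs the OpenAPI document from the unreachable apiserver; the error text itself names the workaround. A sketch of the retry it points at, assuming the manifest and binary paths from the log are still present (note this is the suggestion from the error message, not something minikube is shown doing here):

    # Hypothetical retry of the failed addon apply, skipping the schema
    # validation that fetches https://localhost:8441/openapi/v2.
    # With the apiserver still down this would fail at submission instead;
    # --validate=false only removes the validation round-trip shown above.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      --validate=false -f /etc/kubernetes/addons/storageclass.yaml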
	I1210 07:46:12.614848  412953 out.go:179] * Enabled addons: 
	I1210 07:46:12.617540  412953 addons.go:530] duration metric: took 1m57.960111858s for enable addons: enabled=[]
	I1210 07:46:12.662919  412953 type.go:168] "Request Body" body=""
	I1210 07:46:12.663005  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:12.663288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:12.663340  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-314220 poll repeats every ~500ms from 07:46:13 through 07:46:50, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 keeps logging the will-retry warning at the same cadence ...]
	I1210 07:46:51.163080  412953 type.go:168] "Request Body" body=""
	I1210 07:46:51.163164  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:51.163485  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:51.663128  412953 type.go:168] "Request Body" body=""
	I1210 07:46:51.663202  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:51.663559  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:52.163395  412953 type.go:168] "Request Body" body=""
	I1210 07:46:52.163475  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:52.163785  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:52.663590  412953 type.go:168] "Request Body" body=""
	I1210 07:46:52.663677  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:52.664017  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:52.664072  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:53.162668  412953 type.go:168] "Request Body" body=""
	I1210 07:46:53.162748  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:53.163064  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:53.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:46:53.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:53.663165  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:54.162852  412953 type.go:168] "Request Body" body=""
	I1210 07:46:54.162929  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:54.163260  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:54.663174  412953 type.go:168] "Request Body" body=""
	I1210 07:46:54.663244  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:54.663519  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:55.163370  412953 type.go:168] "Request Body" body=""
	I1210 07:46:55.163453  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:55.163790  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:55.163840  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:55.663591  412953 type.go:168] "Request Body" body=""
	I1210 07:46:55.663675  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:55.664032  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:56.162718  412953 type.go:168] "Request Body" body=""
	I1210 07:46:56.162787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:56.163127  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:56.662776  412953 type.go:168] "Request Body" body=""
	I1210 07:46:56.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:56.663223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:57.162920  412953 type.go:168] "Request Body" body=""
	I1210 07:46:57.162997  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:57.163350  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:57.663044  412953 type.go:168] "Request Body" body=""
	I1210 07:46:57.663119  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:57.663437  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:57.663491  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:58.162764  412953 type.go:168] "Request Body" body=""
	I1210 07:46:58.162842  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:58.163207  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:58.662916  412953 type.go:168] "Request Body" body=""
	I1210 07:46:58.662998  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:58.663346  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:59.162707  412953 type.go:168] "Request Body" body=""
	I1210 07:46:59.162786  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:59.163086  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:59.662772  412953 type.go:168] "Request Body" body=""
	I1210 07:46:59.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:59.663202  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:00.162842  412953 type.go:168] "Request Body" body=""
	I1210 07:47:00.162937  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:00.163453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:00.163526  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:00.663007  412953 type.go:168] "Request Body" body=""
	I1210 07:47:00.663098  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:00.663417  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:01.162776  412953 type.go:168] "Request Body" body=""
	I1210 07:47:01.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:01.163244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:01.662972  412953 type.go:168] "Request Body" body=""
	I1210 07:47:01.663073  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:01.663435  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:02.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:47:02.162838  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:02.163124  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:02.662798  412953 type.go:168] "Request Body" body=""
	I1210 07:47:02.662880  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:02.663234  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:02.663292  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:03.162956  412953 type.go:168] "Request Body" body=""
	I1210 07:47:03.163056  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:03.163409  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:03.663107  412953 type.go:168] "Request Body" body=""
	I1210 07:47:03.663180  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:03.663591  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:04.163359  412953 type.go:168] "Request Body" body=""
	I1210 07:47:04.163439  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:04.163771  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:04.662805  412953 type.go:168] "Request Body" body=""
	I1210 07:47:04.662883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:04.663237  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:05.162670  412953 type.go:168] "Request Body" body=""
	I1210 07:47:05.162740  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:05.163054  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:05.163104  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:05.662994  412953 type.go:168] "Request Body" body=""
	I1210 07:47:05.663093  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:05.663427  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:06.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:06.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:06.163239  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:06.662861  412953 type.go:168] "Request Body" body=""
	I1210 07:47:06.662927  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:06.663190  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:07.162855  412953 type.go:168] "Request Body" body=""
	I1210 07:47:07.162985  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:07.163313  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:07.163372  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:07.662758  412953 type.go:168] "Request Body" body=""
	I1210 07:47:07.662832  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:07.663175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:08.162737  412953 type.go:168] "Request Body" body=""
	I1210 07:47:08.162808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:08.163134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:08.662744  412953 type.go:168] "Request Body" body=""
	I1210 07:47:08.662821  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:08.663156  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:09.162866  412953 type.go:168] "Request Body" body=""
	I1210 07:47:09.162942  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:09.163277  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:09.662758  412953 type.go:168] "Request Body" body=""
	I1210 07:47:09.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:09.663155  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:09.663202  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:10.162922  412953 type.go:168] "Request Body" body=""
	I1210 07:47:10.163005  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:10.163368  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:10.662972  412953 type.go:168] "Request Body" body=""
	I1210 07:47:10.663067  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:10.663391  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:11.163075  412953 type.go:168] "Request Body" body=""
	I1210 07:47:11.163169  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:11.163529  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:11.663298  412953 type.go:168] "Request Body" body=""
	I1210 07:47:11.663381  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:11.663711  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:11.663770  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:12.163545  412953 type.go:168] "Request Body" body=""
	I1210 07:47:12.163626  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:12.163962  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:12.663589  412953 type.go:168] "Request Body" body=""
	I1210 07:47:12.663663  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:12.663928  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:13.163691  412953 type.go:168] "Request Body" body=""
	I1210 07:47:13.163763  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:13.164095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:13.662728  412953 type.go:168] "Request Body" body=""
	I1210 07:47:13.662802  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:13.663152  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:14.162755  412953 type.go:168] "Request Body" body=""
	I1210 07:47:14.162827  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:14.163128  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:14.163174  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:14.663040  412953 type.go:168] "Request Body" body=""
	I1210 07:47:14.663122  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:14.663453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:15.163172  412953 type.go:168] "Request Body" body=""
	I1210 07:47:15.163245  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:15.163583  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:15.663556  412953 type.go:168] "Request Body" body=""
	I1210 07:47:15.663626  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:15.663897  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:16.162681  412953 type.go:168] "Request Body" body=""
	I1210 07:47:16.162760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:16.163086  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:16.662812  412953 type.go:168] "Request Body" body=""
	I1210 07:47:16.662888  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:16.663240  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:16.663299  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:17.162710  412953 type.go:168] "Request Body" body=""
	I1210 07:47:17.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:17.163065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:17.662764  412953 type.go:168] "Request Body" body=""
	I1210 07:47:17.662843  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:17.663184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:18.162904  412953 type.go:168] "Request Body" body=""
	I1210 07:47:18.162985  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:18.163341  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:18.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:47:18.662779  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:18.663072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:19.162752  412953 type.go:168] "Request Body" body=""
	I1210 07:47:19.162825  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:19.163160  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:19.163217  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:19.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:47:19.662831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:19.663169  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:20.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:47:20.162785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:20.163102  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:20.662785  412953 type.go:168] "Request Body" body=""
	I1210 07:47:20.662863  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:20.663210  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:21.162946  412953 type.go:168] "Request Body" body=""
	I1210 07:47:21.163041  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:21.163356  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:21.163416  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:21.662705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:21.662777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:21.663055  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:22.162766  412953 type.go:168] "Request Body" body=""
	I1210 07:47:22.162839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:22.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:22.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:22.662854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:22.663212  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:23.163429  412953 type.go:168] "Request Body" body=""
	I1210 07:47:23.163503  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:23.163746  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:23.163785  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:23.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:47:23.663593  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:23.663930  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:24.163583  412953 type.go:168] "Request Body" body=""
	I1210 07:47:24.163661  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:24.164012  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:24.662895  412953 type.go:168] "Request Body" body=""
	I1210 07:47:24.662963  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:24.663238  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:25.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:25.162856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:25.163189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:25.662776  412953 type.go:168] "Request Body" body=""
	I1210 07:47:25.662854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:25.663185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:25.663235  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:26.162705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:26.162780  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:26.163095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:26.662818  412953 type.go:168] "Request Body" body=""
	I1210 07:47:26.662889  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:26.663264  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:27.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:47:27.162913  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:27.163219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:27.662705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:27.662774  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:27.663040  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:28.162741  412953 type.go:168] "Request Body" body=""
	I1210 07:47:28.162822  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:28.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:28.163235  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:28.662765  412953 type.go:168] "Request Body" body=""
	I1210 07:47:28.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:28.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:29.163595  412953 type.go:168] "Request Body" body=""
	I1210 07:47:29.163681  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:29.163938  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:29.663716  412953 type.go:168] "Request Body" body=""
	I1210 07:47:29.663791  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:29.664120  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:30.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:47:30.162881  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:30.163228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:30.163277  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:30.662975  412953 type.go:168] "Request Body" body=""
	I1210 07:47:30.663060  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:30.663309  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:31.162750  412953 type.go:168] "Request Body" body=""
	I1210 07:47:31.162823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:31.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:31.662903  412953 type.go:168] "Request Body" body=""
	I1210 07:47:31.662983  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:31.663348  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:32.163059  412953 type.go:168] "Request Body" body=""
	I1210 07:47:32.163151  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:32.163486  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:32.163538  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:32.663271  412953 type.go:168] "Request Body" body=""
	I1210 07:47:32.663354  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:32.663709  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:33.163521  412953 type.go:168] "Request Body" body=""
	I1210 07:47:33.163606  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:33.163923  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:33.663552  412953 type.go:168] "Request Body" body=""
	I1210 07:47:33.663621  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:33.663890  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:34.162672  412953 type.go:168] "Request Body" body=""
	I1210 07:47:34.162756  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:34.163144  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:34.662973  412953 type.go:168] "Request Body" body=""
	I1210 07:47:34.663067  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:34.663388  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:34.663446  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:35.163091  412953 type.go:168] "Request Body" body=""
	I1210 07:47:35.163163  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:35.163426  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:35.663452  412953 type.go:168] "Request Body" body=""
	I1210 07:47:35.663527  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:35.663843  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:36.163648  412953 type.go:168] "Request Body" body=""
	I1210 07:47:36.163726  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:36.164083  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:36.662824  412953 type.go:168] "Request Body" body=""
	I1210 07:47:36.662911  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:36.663202  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:37.162888  412953 type.go:168] "Request Body" body=""
	I1210 07:47:37.162961  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:37.163292  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:37.163351  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:37.663060  412953 type.go:168] "Request Body" body=""
	I1210 07:47:37.663139  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:37.663481  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:38.163223  412953 type.go:168] "Request Body" body=""
	I1210 07:47:38.163297  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:38.163629  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:38.663452  412953 type.go:168] "Request Body" body=""
	I1210 07:47:38.663533  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:38.663884  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:39.163604  412953 type.go:168] "Request Body" body=""
	I1210 07:47:39.163682  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:39.164033  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:39.164085  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:39.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:47:39.662798  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:39.663102  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:40.162773  412953 type.go:168] "Request Body" body=""
	I1210 07:47:40.162850  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:40.163206  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:40.663034  412953 type.go:168] "Request Body" body=""
	I1210 07:47:40.663108  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:40.663451  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:41.162787  412953 type.go:168] "Request Body" body=""
	I1210 07:47:41.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:41.163166  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:41.662816  412953 type.go:168] "Request Body" body=""
	I1210 07:47:41.662924  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:41.663355  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:41.663428  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:42.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:47:42.162931  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:42.163383  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:42.662719  412953 type.go:168] "Request Body" body=""
	I1210 07:47:42.662787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:42.663059  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:43.162773  412953 type.go:168] "Request Body" body=""
	I1210 07:47:43.162848  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:43.163226  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:43.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:47:43.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:43.663241  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:44.162732  412953 type.go:168] "Request Body" body=""
	I1210 07:47:44.162801  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:44.163080  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:44.163121  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:44.663044  412953 type.go:168] "Request Body" body=""
	I1210 07:47:44.663116  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:44.663433  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:45.163203  412953 type.go:168] "Request Body" body=""
	I1210 07:47:45.163296  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:45.163726  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:45.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:47:45.662794  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:45.663175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:46.162894  412953 type.go:168] "Request Body" body=""
	I1210 07:47:46.162972  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:46.163291  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:46.163350  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:46.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:46.662853  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:46.663154  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:47.162673  412953 type.go:168] "Request Body" body=""
	I1210 07:47:47.162741  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:47.163000  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:47.662781  412953 type.go:168] "Request Body" body=""
	I1210 07:47:47.662882  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:47.663282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:48.163055  412953 type.go:168] "Request Body" body=""
	I1210 07:47:48.163152  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:48.163450  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:48.163501  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:48.663148  412953 type.go:168] "Request Body" body=""
	I1210 07:47:48.663224  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:48.663521  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:49.163373  412953 type.go:168] "Request Body" body=""
	I1210 07:47:49.163457  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:49.163828  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:49.663672  412953 type.go:168] "Request Body" body=""
	I1210 07:47:49.663767  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:49.664111  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:50.163649  412953 type.go:168] "Request Body" body=""
	I1210 07:47:50.163722  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:50.163974  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:50.164024  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
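
Each request in this loop advertises Accept: application/vnd.kubernetes.protobuf,application/json, i.e. the client prefers the Kubernetes protobuf encoding and falls back to JSON. A hedged sketch of issuing one such request by hand follows; the two header values are copied from the log, everything else is illustrative.

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet,
			"https://192.168.49.2:8441/api/v1/nodes/functional-314220", nil)
		if err != nil {
			panic(err)
		}
		// Prefer protobuf, fall back to JSON, exactly as the
		// round_trippers request lines above show.
		req.Header.Set("Accept",
			"application/vnd.kubernetes.protobuf,application/json")
		req.Header.Set("User-Agent",
			"minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format")

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			// With the apiserver down this is the familiar
			// "dial tcp 192.168.49.2:8441: connect: connection refused".
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("negotiated Content-Type:", resp.Header.Get("Content-Type"))
	}
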
	I1210 07:47:50.662959  412953 type.go:168] "Request Body" body=""
	I1210 07:47:50.663053  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:50.663390  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:51.163109  412953 type.go:168] "Request Body" body=""
	I1210 07:47:51.163226  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:51.163573  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:51.663350  412953 type.go:168] "Request Body" body=""
	I1210 07:47:51.663424  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:51.663681  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:52.163449  412953 type.go:168] "Request Body" body=""
	I1210 07:47:52.163526  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:52.163814  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:52.663598  412953 type.go:168] "Request Body" body=""
	I1210 07:47:52.663677  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:52.664036  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:52.664093  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:53.162754  412953 type.go:168] "Request Body" body=""
	I1210 07:47:53.162833  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:53.163184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:53.662766  412953 type.go:168] "Request Body" body=""
	I1210 07:47:53.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:53.663209  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:54.162947  412953 type.go:168] "Request Body" body=""
	I1210 07:47:54.163051  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:54.163402  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:54.663120  412953 type.go:168] "Request Body" body=""
	I1210 07:47:54.663194  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:54.663486  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:55.163337  412953 type.go:168] "Request Body" body=""
	I1210 07:47:55.163413  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:55.163797  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:55.163853  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:55.663646  412953 type.go:168] "Request Body" body=""
	I1210 07:47:55.663720  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:55.664034  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:56.162701  412953 type.go:168] "Request Body" body=""
	I1210 07:47:56.162774  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:56.163098  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:56.662775  412953 type.go:168] "Request Body" body=""
	I1210 07:47:56.662859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:56.663186  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:57.162786  412953 type.go:168] "Request Body" body=""
	I1210 07:47:57.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:57.163228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:57.663510  412953 type.go:168] "Request Body" body=""
	I1210 07:47:57.663581  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:57.663872  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:57.663929  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:58.163725  412953 type.go:168] "Request Body" body=""
	I1210 07:47:58.163813  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:58.164168  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:58.662866  412953 type.go:168] "Request Body" body=""
	I1210 07:47:58.662937  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:58.663291  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:59.163430  412953 type.go:168] "Request Body" body=""
	I1210 07:47:59.163502  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:59.163755  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:59.663519  412953 type.go:168] "Request Body" body=""
	I1210 07:47:59.663597  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:59.663931  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:59.663987  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:00.163719  412953 type.go:168] "Request Body" body=""
	I1210 07:48:00.163813  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:00.164225  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:00.663315  412953 type.go:168] "Request Body" body=""
	I1210 07:48:00.663399  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:00.663675  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:01.163489  412953 type.go:168] "Request Body" body=""
	I1210 07:48:01.163567  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:01.163893  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:01.663702  412953 type.go:168] "Request Body" body=""
	I1210 07:48:01.663781  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:01.664135  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:01.664198  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:02.162817  412953 type.go:168] "Request Body" body=""
	I1210 07:48:02.162886  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:02.163184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:02.662884  412953 type.go:168] "Request Body" body=""
	I1210 07:48:02.662971  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:02.663338  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:03.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:48:03.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:03.163213  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:03.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:48:03.662836  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:03.663120  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:04.162778  412953 type.go:168] "Request Body" body=""
	I1210 07:48:04.162852  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:04.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:04.163249  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:04.663059  412953 type.go:168] "Request Body" body=""
	I1210 07:48:04.663141  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:04.663463  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:05.162715  412953 type.go:168] "Request Body" body=""
	I1210 07:48:05.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:05.163131  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:05.663057  412953 type.go:168] "Request Body" body=""
	I1210 07:48:05.663143  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:05.663537  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:06.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:06.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:06.163194  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:06.662713  412953 type.go:168] "Request Body" body=""
	I1210 07:48:06.662786  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:06.663074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:06.663118  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:07.162740  412953 type.go:168] "Request Body" body=""
	I1210 07:48:07.162814  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:07.163183  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:07.662797  412953 type.go:168] "Request Body" body=""
	I1210 07:48:07.662880  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:07.663236  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:08.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:08.162871  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:08.163193  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:08.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:48:08.662859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:08.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:08.663248  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:09.162797  412953 type.go:168] "Request Body" body=""
	I1210 07:48:09.162876  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:09.163240  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:09.662795  412953 type.go:168] "Request Body" body=""
	I1210 07:48:09.662869  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:09.663133  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:10.162749  412953 type.go:168] "Request Body" body=""
	I1210 07:48:10.162821  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:10.163157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:10.662788  412953 type.go:168] "Request Body" body=""
	I1210 07:48:10.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:10.663447  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:10.663517  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:11.163125  412953 type.go:168] "Request Body" body=""
	I1210 07:48:11.163197  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:11.163452  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:11.663284  412953 type.go:168] "Request Body" body=""
	I1210 07:48:11.663356  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:11.663683  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:12.163495  412953 type.go:168] "Request Body" body=""
	I1210 07:48:12.163578  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:12.163924  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:12.663702  412953 type.go:168] "Request Body" body=""
	I1210 07:48:12.663773  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:12.664091  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:12.664142  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:13.162781  412953 type.go:168] "Request Body" body=""
	I1210 07:48:13.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:13.163174  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:13.662779  412953 type.go:168] "Request Body" body=""
	I1210 07:48:13.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:13.663203  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:14.162879  412953 type.go:168] "Request Body" body=""
	I1210 07:48:14.162951  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:14.163288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:14.663062  412953 type.go:168] "Request Body" body=""
	I1210 07:48:14.663137  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:14.663477  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:15.163290  412953 type.go:168] "Request Body" body=""
	I1210 07:48:15.163371  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:15.163703  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:15.163759  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:15.662995  412953 type.go:168] "Request Body" body=""
	I1210 07:48:15.663090  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:15.663504  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:16.163293  412953 type.go:168] "Request Body" body=""
	I1210 07:48:16.163371  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:16.163704  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:16.663391  412953 type.go:168] "Request Body" body=""
	I1210 07:48:16.663468  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:16.663824  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:17.163573  412953 type.go:168] "Request Body" body=""
	I1210 07:48:17.163645  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:17.163900  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:17.163949  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:17.662654  412953 type.go:168] "Request Body" body=""
	I1210 07:48:17.662730  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:17.663090  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:18.162799  412953 type.go:168] "Request Body" body=""
	I1210 07:48:18.162871  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:18.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:18.662707  412953 type.go:168] "Request Body" body=""
	I1210 07:48:18.662777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:18.663073  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:19.162733  412953 type.go:168] "Request Body" body=""
	I1210 07:48:19.162808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:19.163121  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:19.662805  412953 type.go:168] "Request Body" body=""
	I1210 07:48:19.662883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:19.663227  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:19.663286  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:20.162738  412953 type.go:168] "Request Body" body=""
	I1210 07:48:20.162812  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:20.163160  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:20.662944  412953 type.go:168] "Request Body" body=""
	I1210 07:48:20.663033  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:20.663345  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:21.162798  412953 type.go:168] "Request Body" body=""
	I1210 07:48:21.162875  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:21.163231  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:21.662783  412953 type.go:168] "Request Body" body=""
	I1210 07:48:21.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:21.663168  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:22.162769  412953 type.go:168] "Request Body" body=""
	I1210 07:48:22.162847  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:22.163187  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:22.163245  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:22.662786  412953 type.go:168] "Request Body" body=""
	I1210 07:48:22.662860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:22.663219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:23.162659  412953 type.go:168] "Request Body" body=""
	I1210 07:48:23.162735  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:23.163055  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:23.662748  412953 type.go:168] "Request Body" body=""
	I1210 07:48:23.662823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:23.663195  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:24.162905  412953 type.go:168] "Request Body" body=""
	I1210 07:48:24.162979  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:24.163329  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:24.163388  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:24.662665  412953 type.go:168] "Request Body" body=""
	I1210 07:48:24.662746  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:24.662999  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:25.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:48:25.162867  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:25.163229  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:25.662989  412953 type.go:168] "Request Body" body=""
	I1210 07:48:25.663082  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:25.663400  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:26.162934  412953 type.go:168] "Request Body" body=""
	I1210 07:48:26.163003  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:26.163282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:26.662757  412953 type.go:168] "Request Body" body=""
	I1210 07:48:26.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:26.663211  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:26.663268  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:27.162792  412953 type.go:168] "Request Body" body=""
	I1210 07:48:27.162873  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:27.163238  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:27.662787  412953 type.go:168] "Request Body" body=""
	I1210 07:48:27.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:27.663143  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:28.162822  412953 type.go:168] "Request Body" body=""
	I1210 07:48:28.162896  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:28.163266  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:28.662851  412953 type.go:168] "Request Body" body=""
	I1210 07:48:28.662926  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:28.663274  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:28.663332  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:29.162716  412953 type.go:168] "Request Body" body=""
	I1210 07:48:29.162788  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:29.163115  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:29.662940  412953 type.go:168] "Request Body" body=""
	I1210 07:48:29.663035  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:29.663342  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:30.162806  412953 type.go:168] "Request Body" body=""
	I1210 07:48:30.162885  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:30.163215  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:30.663057  412953 type.go:168] "Request Body" body=""
	I1210 07:48:30.663131  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:30.663453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:30.663521  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:31.162771  412953 type.go:168] "Request Body" body=""
	I1210 07:48:31.162848  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:31.163188  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:31.662737  412953 type.go:168] "Request Body" body=""
	I1210 07:48:31.662810  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:31.663134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:32.162725  412953 type.go:168] "Request Body" body=""
	I1210 07:48:32.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:32.163156  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:32.662781  412953 type.go:168] "Request Body" body=""
	I1210 07:48:32.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:32.663205  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:33.162800  412953 type.go:168] "Request Body" body=""
	I1210 07:48:33.162878  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:33.163211  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:33.163260  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:33.662739  412953 type.go:168] "Request Body" body=""
	I1210 07:48:33.662809  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:33.663092  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:34.162756  412953 type.go:168] "Request Body" body=""
	I1210 07:48:34.162829  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:34.163174  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:34.663106  412953 type.go:168] "Request Body" body=""
	I1210 07:48:34.663176  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:34.663513  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:35.163270  412953 type.go:168] "Request Body" body=""
	I1210 07:48:35.163336  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:35.163605  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:35.163645  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:35.663599  412953 type.go:168] "Request Body" body=""
	I1210 07:48:35.663677  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:35.663980  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-314220 request/response cycle repeated every ~500ms from 07:48:36 through 07:49:37, every attempt returning "dial tcp 192.168.49.2:8441: connect: connection refused"; the node_ready.go:55 "will retry" warning recurred every ~2.5s ...]
	I1210 07:49:37.162771  412953 type.go:168] "Request Body" body=""
	I1210 07:49:37.162853  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:37.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:37.163246  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:37.662782  412953 type.go:168] "Request Body" body=""
	I1210 07:49:37.662863  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:37.663181  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:38.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:49:38.162866  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:38.163339  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:38.663026  412953 type.go:168] "Request Body" body=""
	I1210 07:49:38.663107  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:38.663399  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:39.162747  412953 type.go:168] "Request Body" body=""
	I1210 07:49:39.162828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:39.163215  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:39.163277  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:39.662785  412953 type.go:168] "Request Body" body=""
	I1210 07:49:39.662860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:39.663137  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:40.162787  412953 type.go:168] "Request Body" body=""
	I1210 07:49:40.162874  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:40.163221  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:40.663148  412953 type.go:168] "Request Body" body=""
	I1210 07:49:40.663236  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:40.663586  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:41.163335  412953 type.go:168] "Request Body" body=""
	I1210 07:49:41.163407  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:41.163738  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:41.163786  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:41.663533  412953 type.go:168] "Request Body" body=""
	I1210 07:49:41.663607  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:41.663906  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:42.163638  412953 type.go:168] "Request Body" body=""
	I1210 07:49:42.163723  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:42.164115  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:42.663519  412953 type.go:168] "Request Body" body=""
	I1210 07:49:42.663591  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:42.663894  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:43.163685  412953 type.go:168] "Request Body" body=""
	I1210 07:49:43.163760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:43.164112  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:43.164166  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:43.662835  412953 type.go:168] "Request Body" body=""
	I1210 07:49:43.662918  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:43.663257  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:44.162716  412953 type.go:168] "Request Body" body=""
	I1210 07:49:44.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:44.163078  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:44.663092  412953 type.go:168] "Request Body" body=""
	I1210 07:49:44.663175  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:44.663483  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:45.163326  412953 type.go:168] "Request Body" body=""
	I1210 07:49:45.163419  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:45.163779  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:45.662654  412953 type.go:168] "Request Body" body=""
	I1210 07:49:45.662729  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:45.663065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:45.663117  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:46.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:46.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:46.163223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:46.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:49:46.662868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:46.663236  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:47.162751  412953 type.go:168] "Request Body" body=""
	I1210 07:49:47.162831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:47.163170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:47.662766  412953 type.go:168] "Request Body" body=""
	I1210 07:49:47.662844  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:47.663198  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:47.663257  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:48.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:49:48.162890  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:48.163263  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:48.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:49:48.662776  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:48.663061  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:49.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:49:49.162894  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:49.163340  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:49.662794  412953 type.go:168] "Request Body" body=""
	I1210 07:49:49.662875  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:49.663233  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:49.663295  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:50.163641  412953 type.go:168] "Request Body" body=""
	I1210 07:49:50.163727  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:50.163987  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:50.663073  412953 type.go:168] "Request Body" body=""
	I1210 07:49:50.663155  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:50.663511  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:51.163122  412953 type.go:168] "Request Body" body=""
	I1210 07:49:51.163206  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:51.163540  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:51.663260  412953 type.go:168] "Request Body" body=""
	I1210 07:49:51.663334  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:51.663585  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:51.663641  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:52.163471  412953 type.go:168] "Request Body" body=""
	I1210 07:49:52.163547  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:52.163896  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:52.663720  412953 type.go:168] "Request Body" body=""
	I1210 07:49:52.663796  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:52.664128  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:53.162655  412953 type.go:168] "Request Body" body=""
	I1210 07:49:53.162729  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:53.162984  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:53.662712  412953 type.go:168] "Request Body" body=""
	I1210 07:49:53.662785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:53.663162  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:54.162732  412953 type.go:168] "Request Body" body=""
	I1210 07:49:54.162812  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:54.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:54.163230  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:54.663007  412953 type.go:168] "Request Body" body=""
	I1210 07:49:54.663092  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:54.663348  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:55.162765  412953 type.go:168] "Request Body" body=""
	I1210 07:49:55.162839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:55.163203  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:55.662783  412953 type.go:168] "Request Body" body=""
	I1210 07:49:55.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:55.663208  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:56.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:49:56.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:56.163134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:56.662828  412953 type.go:168] "Request Body" body=""
	I1210 07:49:56.662903  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:56.663245  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:56.663302  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:57.162993  412953 type.go:168] "Request Body" body=""
	I1210 07:49:57.163092  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:57.163459  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:57.663133  412953 type.go:168] "Request Body" body=""
	I1210 07:49:57.663213  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:57.663561  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:58.163311  412953 type.go:168] "Request Body" body=""
	I1210 07:49:58.163387  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:58.163735  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:58.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:49:58.663591  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:58.663925  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:58.663979  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:59.163560  412953 type.go:168] "Request Body" body=""
	I1210 07:49:59.163638  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:59.163958  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:59.662670  412953 type.go:168] "Request Body" body=""
	I1210 07:49:59.662749  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:59.663105  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:00.162806  412953 type.go:168] "Request Body" body=""
	I1210 07:50:00.162893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:00.163244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:00.663460  412953 type.go:168] "Request Body" body=""
	I1210 07:50:00.663540  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:00.664068  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:00.664196  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:01.162803  412953 type.go:168] "Request Body" body=""
	I1210 07:50:01.162896  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:01.163273  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:01.662786  412953 type.go:168] "Request Body" body=""
	I1210 07:50:01.662862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:01.663242  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:02.163061  412953 type.go:168] "Request Body" body=""
	I1210 07:50:02.163133  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:02.163407  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:02.662773  412953 type.go:168] "Request Body" body=""
	I1210 07:50:02.662847  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:02.663177  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:03.162915  412953 type.go:168] "Request Body" body=""
	I1210 07:50:03.162990  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:03.163330  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:03.163387  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:03.662715  412953 type.go:168] "Request Body" body=""
	I1210 07:50:03.662782  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:03.663054  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:04.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:50:04.162866  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:04.163214  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:04.663178  412953 type.go:168] "Request Body" body=""
	I1210 07:50:04.663273  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:04.663624  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:05.163424  412953 type.go:168] "Request Body" body=""
	I1210 07:50:05.163513  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:05.163807  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:05.163852  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:05.662744  412953 type.go:168] "Request Body" body=""
	I1210 07:50:05.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:05.663225  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:06.162951  412953 type.go:168] "Request Body" body=""
	I1210 07:50:06.163056  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:06.163423  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:06.662712  412953 type.go:168] "Request Body" body=""
	I1210 07:50:06.662782  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:06.663083  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:07.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:50:07.162850  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:07.163201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:07.662893  412953 type.go:168] "Request Body" body=""
	I1210 07:50:07.662969  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:07.663309  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:07.663366  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:08.162691  412953 type.go:168] "Request Body" body=""
	I1210 07:50:08.162763  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:08.163063  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:08.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:50:08.662868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:08.663218  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:09.162919  412953 type.go:168] "Request Body" body=""
	I1210 07:50:09.163000  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:09.163347  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:09.662690  412953 type.go:168] "Request Body" body=""
	I1210 07:50:09.662770  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:09.663051  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:10.162778  412953 type.go:168] "Request Body" body=""
	I1210 07:50:10.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:10.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:10.163249  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:10.662950  412953 type.go:168] "Request Body" body=""
	I1210 07:50:10.663039  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:10.663349  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:11.162723  412953 type.go:168] "Request Body" body=""
	I1210 07:50:11.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:11.163072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:11.662754  412953 type.go:168] "Request Body" body=""
	I1210 07:50:11.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:11.663172  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:12.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:50:12.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:12.163253  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:12.163314  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:12.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:50:12.662806  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:12.663135  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:13.162827  412953 type.go:168] "Request Body" body=""
	I1210 07:50:13.162909  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:13.163270  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:13.662849  412953 type.go:168] "Request Body" body=""
	I1210 07:50:13.662924  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:13.663237  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:14.162709  412953 type.go:168] "Request Body" body=""
	I1210 07:50:14.162777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:14.163050  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:14.663072  412953 type.go:168] "Request Body" body=""
	I1210 07:50:14.663154  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:14.663480  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:14.663533  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:15.163298  412953 type.go:168] "Request Body" body=""
	I1210 07:50:15.163374  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:15.163686  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:15.662909  412953 node_ready.go:38] duration metric: took 6m0.000357427s for node "functional-314220" to be "Ready" ...
	I1210 07:50:15.669570  412953 out.go:203] 
	W1210 07:50:15.672493  412953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:50:15.672574  412953 out.go:285] * 
	W1210 07:50:15.674736  412953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:50:15.677520  412953 out.go:203] 
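	
	The block above is minikube's node-readiness wait failing closed: roughly every 500ms it GETs /api/v1/nodes/functional-314220, the TCP connect to 192.168.49.2:8441 is refused because kube-apiserver is not running, and after the 6m0s deadline the wait gives up with GUEST_START. A minimal sketch of that wait pattern with client-go (not minikube's actual code; the kubeconfig path is the in-guest one from the describe-nodes command below, and the intervals are illustrative):
	
	// node_ready_sketch.go: poll a node's Ready condition until a deadline,
	// mirroring the retry-until-context-deadline behavior in the log above.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Retry every 500ms for up to 6 minutes; per-attempt errors such as
		// "connection refused" are swallowed and retried, matching the
		// warn-and-retry cadence in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "functional-314220", metav1.GetOptions{})
				if err != nil {
					return false, nil // retry until the deadline
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Println("node never became Ready:", err) // context deadline exceeded
		}
	}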
	
	
	==> CRI-O <==
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.388849086Z" level=info msg="Using the internal default seccomp profile"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.38885711Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.388863068Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.38886886Z" level=info msg="RDT not available in the host system"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.388880865Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.389567258Z" level=info msg="Conmon does support the --sync option"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.389591086Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.389607497Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.390302998Z" level=info msg="Conmon does support the --sync option"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.390324102Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.390445343Z" level=info msg="Updated default CNI network name to "
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.391033462Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.391396799Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.391450141Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440019086Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440054435Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440099055Z" level=info msg="Create NRI interface"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440367407Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.44039327Z" level=info msg="runtime interface created"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440406103Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440412708Z" level=info msg="runtime interface starting up..."
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440423556Z" level=info msg="starting plugins..."
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440438891Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:44:13 functional-314220 crio[5354]: time="2025-12-10T07:44:13.440515126Z" level=info msg="No systemd watchdog enabled"
	Dec 10 07:44:13 functional-314220 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
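	
	The table is empty: CRI-O is up, but kubelet never stayed up, so no pods or containers were ever created. With the apiserver down, the runtime can still be queried directly over the CRI socket shown in the config above (crictl ps -a does the same from the command line); a minimal Go sketch against the standard CRI v1 API:
	
	// cri_list_sketch.go: list containers straight from CRI-O's socket,
	// useful when kubectl cannot reach the apiserver.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// grpc.NewClient requires grpc-go >= 1.63; older versions use grpc.Dial.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d containers\n", len(resp.Containers)) // 0 here, matching the empty table
	}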
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:50:20.092863    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:20.093434    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:20.095002    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:20.095588    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:20.097231    8690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
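	
	All five kubectl probes fail the same way: nothing is listening on port 8441, either on localhost or on 192.168.49.2, because the apiserver never came up (the kubelet section below shows why). The same reachability check, reduced to a plain TCP dial with the host and port taken from the log:
	
	// dial_check_sketch.go: confirm whether anything listens on the apiserver port.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // "connect: connection refused" here
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}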
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:50:20 up  2:32,  0 user,  load average: 0.33, 0.29, 0.81
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:50:17 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:17 functional-314220 kubelet[8565]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:17 functional-314220 kubelet[8565]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:17 functional-314220 kubelet[8565]: E1210 07:50:17.972825    8565 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:17 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:17 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:18 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 10 07:50:18 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:18 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:18 functional-314220 kubelet[8584]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:18 functional-314220 kubelet[8584]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:18 functional-314220 kubelet[8584]: E1210 07:50:18.673318    8584 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:18 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:18 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:19 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 10 07:50:19 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:19 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:19 functional-314220 kubelet[8607]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:19 functional-314220 kubelet[8607]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:19 functional-314220 kubelet[8607]: E1210 07:50:19.477096    8607 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:19 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:19 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:20 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 10 07:50:20 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:20 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (409.268752ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.50s)
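
The kubelet journal above points at the root cause: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), crash-loops past restart 1140, and the apiserver on port 8441 therefore never comes up. A minimal way to confirm the hierarchy, assuming shell access to the host (illustrative commands, not part of the recorded run):

	# "cgroup2fs" means cgroup v2; "tmpfs" indicates a cgroup v1 mount.
	stat -fc %T /sys/fs/cgroup/
	# The kic container inherits the host hierarchy, so the same check applies inside it:
	docker exec functional-314220 stat -fc %T /sys/fs/cgroup/

Ubuntu 20.04 (the 5.15.0-1084-aws host seen in dmesg) boots with cgroup v1 by default unless systemd.unified_cgroup_hierarchy=1 is added to the kernel command line, which is consistent with the repeated validation failure.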
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 kubectl -- --context functional-314220 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 kubectl -- --context functional-314220 get pods: exit status 1 (113.129046ms)
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-314220 kubectl -- --context functional-314220 get pods": exit status 1
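
A quick way to separate "apiserver down" from "wrong endpoint" here would be to probe the address kubectl was refused from; a sketch with hypothetical commands the test run did not execute:

	# Is anything listening on the apiserver endpoint at all?
	nc -zv 192.168.49.2 8441
	# If something answers, the liveness endpoint (anonymously readable by default) should respond:
	curl -sk https://192.168.49.2:8441/livez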
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
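
The NetworkSettings block above shows 8441/tcp published at 127.0.0.1:33161, while kubectl was refused at the container IP directly, consistent with the apiserver itself being down rather than the docker port mapping. For reference, that mapping can be read without dumping the full JSON; a sketch using the same template form minikube applies to port 22 later in these logs:

	docker port functional-314220 8441/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-314220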
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (307.777724ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-446865 image ls --format short --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format yaml --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh     │ functional-446865 ssh pgrep buildkitd                                                                                                             │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ image   │ functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr                                            │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format json --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls                                                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format table --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ delete  │ -p functional-446865                                                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ start   │ -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ start   │ -p functional-314220 --alsologtostderr -v=8                                                                                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:44 UTC │                     │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:latest                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add minikube-local-cache-test:functional-314220                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache delete minikube-local-cache-test:functional-314220                                                                        │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl images                                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	│ cache   │ functional-314220 cache reload                                                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ kubectl │ functional-314220 kubectl -- --context functional-314220 get pods                                                                                 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:44:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:44:10.487397  412953 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:44:10.487521  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487566  412953 out.go:374] Setting ErrFile to fd 2...
	I1210 07:44:10.487572  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487834  412953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:44:10.488205  412953 out.go:368] Setting JSON to false
	I1210 07:44:10.489052  412953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8801,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:44:10.489127  412953 start.go:143] virtualization:  
	I1210 07:44:10.492628  412953 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:44:10.495451  412953 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:44:10.495581  412953 notify.go:221] Checking for updates...
	I1210 07:44:10.501282  412953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:44:10.504171  412953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:10.506968  412953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:44:10.509885  412953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:44:10.512742  412953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:44:10.516079  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:10.516221  412953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:44:10.539133  412953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:44:10.539253  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.606789  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.597593273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.606896  412953 docker.go:319] overlay module found
	I1210 07:44:10.611915  412953 out.go:179] * Using the docker driver based on existing profile
	I1210 07:44:10.614862  412953 start.go:309] selected driver: docker
	I1210 07:44:10.614885  412953 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.614994  412953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:44:10.615113  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.673141  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.664474897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.673572  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:10.673631  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:10.673679  412953 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.678679  412953 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:44:10.681372  412953 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:44:10.684277  412953 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:44:10.687267  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:10.687329  412953 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:44:10.687343  412953 cache.go:65] Caching tarball of preloaded images
	I1210 07:44:10.687350  412953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:44:10.687434  412953 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:44:10.687444  412953 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 07:44:10.687550  412953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:44:10.707132  412953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:44:10.707156  412953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:44:10.707176  412953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:44:10.707214  412953 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:44:10.707283  412953 start.go:364] duration metric: took 45.104µs to acquireMachinesLock for "functional-314220"
	I1210 07:44:10.707306  412953 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:44:10.707317  412953 fix.go:54] fixHost starting: 
	I1210 07:44:10.707577  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:10.723920  412953 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:44:10.723951  412953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:44:10.727176  412953 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:44:10.727205  412953 machine.go:94] provisionDockerMachine start ...
	I1210 07:44:10.727283  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.744553  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.744931  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.744946  412953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:44:10.878742  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:10.878763  412953 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:44:10.878828  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.897712  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.898057  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.898077  412953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:44:11.052065  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:11.052160  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.072344  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.072686  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.072703  412953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:44:11.207289  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:44:11.207317  412953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:44:11.207348  412953 ubuntu.go:190] setting up certificates
	I1210 07:44:11.207366  412953 provision.go:84] configureAuth start
	I1210 07:44:11.207429  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:11.224935  412953 provision.go:143] copyHostCerts
	I1210 07:44:11.224978  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225021  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:44:11.225032  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225107  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:44:11.225201  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225224  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:44:11.225234  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225268  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:44:11.225321  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225345  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:44:11.225354  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225380  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:44:11.225441  412953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:44:11.417392  412953 provision.go:177] copyRemoteCerts
	I1210 07:44:11.417460  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:44:11.417497  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.436410  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:11.535532  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 07:44:11.535603  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:44:11.553463  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 07:44:11.553526  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:44:11.571834  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 07:44:11.571892  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:44:11.590409  412953 provision.go:87] duration metric: took 383.016251ms to configureAuth
	I1210 07:44:11.590435  412953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:44:11.590614  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:11.590731  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.608257  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.608571  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.608596  412953 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:44:11.906129  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:44:11.906170  412953 machine.go:97] duration metric: took 1.17895657s to provisionDockerMachine
	I1210 07:44:11.906181  412953 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:44:11.906194  412953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:44:11.906264  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:44:11.906303  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.923285  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.019543  412953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:44:12.023176  412953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 07:44:12.023203  412953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 07:44:12.023208  412953 command_runner.go:130] > VERSION_ID="12"
	I1210 07:44:12.023217  412953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 07:44:12.023222  412953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 07:44:12.023226  412953 command_runner.go:130] > ID=debian
	I1210 07:44:12.023231  412953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 07:44:12.023236  412953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 07:44:12.023245  412953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 07:44:12.023295  412953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:44:12.023316  412953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:44:12.023330  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:44:12.023386  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:44:12.023472  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:44:12.023483  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /etc/ssl/certs/3785282.pem
	I1210 07:44:12.023563  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:44:12.023571  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> /etc/test/nested/copy/378528/hosts
	I1210 07:44:12.023617  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:44:12.031659  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:12.049814  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:44:12.067644  412953 start.go:296] duration metric: took 161.447867ms for postStartSetup
	I1210 07:44:12.067748  412953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:44:12.067798  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.084856  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.184547  412953 command_runner.go:130] > 14%
	I1210 07:44:12.184639  412953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:44:12.189562  412953 command_runner.go:130] > 169G
	I1210 07:44:12.189589  412953 fix.go:56] duration metric: took 1.4822703s for fixHost
	I1210 07:44:12.189600  412953 start.go:83] releasing machines lock for "functional-314220", held for 1.482305303s
	I1210 07:44:12.189668  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:12.206193  412953 ssh_runner.go:195] Run: cat /version.json
	I1210 07:44:12.206242  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.206484  412953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:44:12.206547  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.229509  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.231766  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.322395  412953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 07:44:12.322525  412953 ssh_runner.go:195] Run: systemctl --version
	I1210 07:44:12.409743  412953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 07:44:12.412779  412953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 07:44:12.412818  412953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 07:44:12.412894  412953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:44:12.460937  412953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 07:44:12.466609  412953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 07:44:12.466697  412953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:44:12.466802  412953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:44:12.474626  412953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:44:12.474651  412953 start.go:496] detecting cgroup driver to use...
	I1210 07:44:12.474708  412953 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:44:12.474780  412953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:44:12.490092  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:44:12.503562  412953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:44:12.503627  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:44:12.518840  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:44:12.531838  412953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:44:12.642559  412953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:44:12.762873  412953 docker.go:234] disabling docker service ...
	I1210 07:44:12.762979  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:44:12.778725  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:44:12.791652  412953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:44:12.911705  412953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:44:13.035394  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:44:13.049695  412953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:44:13.065431  412953 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1210 07:44:13.065522  412953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:44:13.065609  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.075381  412953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:44:13.075482  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.085452  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.094855  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.104471  412953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:44:13.112786  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.121728  412953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.130205  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.139248  412953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:44:13.145900  412953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 07:44:13.147163  412953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:44:13.154995  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.289205  412953 ssh_runner.go:195] Run: sudo systemctl restart crio
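Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following settings (a reconstruction from the values logged above, not a verbatim dump of the file):

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after minikube's edits)
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The /etc/crictl.yaml written just before points the crictl CLI at the same CRI-O socket, and the daemon-reload plus crio restart above make the new settings take effect.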
	I1210 07:44:13.445871  412953 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:44:13.446002  412953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:44:13.449677  412953 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 07:44:13.449750  412953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 07:44:13.449774  412953 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1210 07:44:13.449787  412953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:13.449793  412953 command_runner.go:130] > Access: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449816  412953 command_runner.go:130] > Modify: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449826  412953 command_runner.go:130] > Change: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449830  412953 command_runner.go:130] >  Birth: -
	I1210 07:44:13.449864  412953 start.go:564] Will wait 60s for crictl version
	I1210 07:44:13.449928  412953 ssh_runner.go:195] Run: which crictl
	I1210 07:44:13.453538  412953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 07:44:13.453678  412953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:44:13.477475  412953 command_runner.go:130] > Version:  0.1.0
	I1210 07:44:13.477498  412953 command_runner.go:130] > RuntimeName:  cri-o
	I1210 07:44:13.477503  412953 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 07:44:13.477509  412953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 07:44:13.477520  412953 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:44:13.477602  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.505751  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.505796  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.505803  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.505808  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.505813  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.505817  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.505821  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.505826  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.505835  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.505838  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.505844  412953 command_runner.go:130] >      static
	I1210 07:44:13.505848  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.505852  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.505859  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.505863  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.505874  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.505877  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.505881  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.505886  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.505895  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.507701  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.535170  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.535233  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.535254  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.535275  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.535296  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.535314  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.535334  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.535358  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.535377  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.535395  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.535414  412953 command_runner.go:130] >      static
	I1210 07:44:13.535432  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.535451  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.535471  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.535489  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.535518  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.535548  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.535566  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.535590  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.535609  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.540516  412953 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:44:13.543340  412953 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:44:13.558881  412953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:44:13.562785  412953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 07:44:13.562964  412953 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:44:13.563103  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:13.563170  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.592036  412953 command_runner.go:130] > {
	I1210 07:44:13.592059  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.592064  412953 command_runner.go:130] >     {
	I1210 07:44:13.592073  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.592083  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592089  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.592093  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592096  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592118  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.592130  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.592138  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592144  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.592154  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592159  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592163  412953 command_runner.go:130] >     },
	I1210 07:44:13.592169  412953 command_runner.go:130] >     {
	I1210 07:44:13.592176  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.592183  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592189  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.592192  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592196  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592207  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.592217  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.592221  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592225  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.592231  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592239  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592246  412953 command_runner.go:130] >     },
	I1210 07:44:13.592249  412953 command_runner.go:130] >     {
	I1210 07:44:13.592255  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.592264  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592269  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.592272  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592278  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592286  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.592297  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.592300  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592306  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.592311  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.592317  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592320  412953 command_runner.go:130] >     },
	I1210 07:44:13.592329  412953 command_runner.go:130] >     {
	I1210 07:44:13.592338  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.592342  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592354  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.592357  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592361  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592374  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.592381  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.592387  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592391  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.592395  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592401  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592405  412953 command_runner.go:130] >       },
	I1210 07:44:13.592420  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592424  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592429  412953 command_runner.go:130] >     },
	I1210 07:44:13.592433  412953 command_runner.go:130] >     {
	I1210 07:44:13.592446  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.592450  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592457  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.592461  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592465  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592474  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.592484  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.592488  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592494  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.592498  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592522  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592525  412953 command_runner.go:130] >       },
	I1210 07:44:13.592530  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592538  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592541  412953 command_runner.go:130] >     },
	I1210 07:44:13.592545  412953 command_runner.go:130] >     {
	I1210 07:44:13.592556  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.592563  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592569  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.592579  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592582  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592591  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.592602  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.592606  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592616  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.592619  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592623  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592628  412953 command_runner.go:130] >       },
	I1210 07:44:13.592633  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592639  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592642  412953 command_runner.go:130] >     },
	I1210 07:44:13.592645  412953 command_runner.go:130] >     {
	I1210 07:44:13.592652  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.592663  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592669  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.592674  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592678  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592691  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.592702  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.592706  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592712  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.592717  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592723  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592726  412953 command_runner.go:130] >     },
	I1210 07:44:13.592729  412953 command_runner.go:130] >     {
	I1210 07:44:13.592735  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.592741  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592747  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.592750  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592764  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592772  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.592793  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.592800  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592804  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.592808  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592817  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592820  412953 command_runner.go:130] >       },
	I1210 07:44:13.592824  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592830  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592834  412953 command_runner.go:130] >     },
	I1210 07:44:13.592843  412953 command_runner.go:130] >     {
	I1210 07:44:13.592849  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.592853  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592858  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.592866  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592870  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592878  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.592888  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.592892  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592898  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.592902  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592911  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.592914  412953 command_runner.go:130] >       },
	I1210 07:44:13.592918  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592924  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.592927  412953 command_runner.go:130] >     }
	I1210 07:44:13.592932  412953 command_runner.go:130] >   ]
	I1210 07:44:13.592935  412953 command_runner.go:130] > }
	I1210 07:44:13.595219  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.595245  412953 crio.go:433] Images already preloaded, skipping extraction
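The preload check above just compares the tags reported by CRI-O against the expected image list for v1.35.0-beta.0. To reproduce it by hand on the node, the same data can be pulled with crictl; a minimal sketch (the image name is one of those listed above, used here purely as an example):

    # list all image tags known to CRI-O, as minikube does above
    sudo crictl images --output json
    # or inspect a single expected image directly
    sudo crictl inspecti registry.k8s.io/kube-apiserver:v1.35.0-beta.0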
	I1210 07:44:13.595305  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.620833  412953 command_runner.go:130] > {
	I1210 07:44:13.620851  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.620856  412953 command_runner.go:130] >     {
	I1210 07:44:13.620865  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.620870  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620884  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.620888  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620896  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620905  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.620913  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.620917  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620921  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.620925  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620930  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620933  412953 command_runner.go:130] >     },
	I1210 07:44:13.620936  412953 command_runner.go:130] >     {
	I1210 07:44:13.620943  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.620947  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620952  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.620955  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620958  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620966  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.620975  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.620978  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620982  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.620985  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620991  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620994  412953 command_runner.go:130] >     },
	I1210 07:44:13.620997  412953 command_runner.go:130] >     {
	I1210 07:44:13.621003  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.621007  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621012  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.621015  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621019  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621027  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.621035  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.621038  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621042  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.621046  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.621049  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621056  412953 command_runner.go:130] >     },
	I1210 07:44:13.621059  412953 command_runner.go:130] >     {
	I1210 07:44:13.621066  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.621070  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621075  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.621079  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621083  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621091  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.621098  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.621102  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621105  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.621109  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621113  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621116  412953 command_runner.go:130] >       },
	I1210 07:44:13.621124  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621128  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621131  412953 command_runner.go:130] >     },
	I1210 07:44:13.621134  412953 command_runner.go:130] >     {
	I1210 07:44:13.621143  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.621147  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621152  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.621156  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621159  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621167  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.621175  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.621178  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621182  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.621185  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621189  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621192  412953 command_runner.go:130] >       },
	I1210 07:44:13.621196  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621199  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621202  412953 command_runner.go:130] >     },
	I1210 07:44:13.621208  412953 command_runner.go:130] >     {
	I1210 07:44:13.621214  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.621218  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621224  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.621227  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621231  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621239  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.621247  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.621250  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621255  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.621258  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621262  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621265  412953 command_runner.go:130] >       },
	I1210 07:44:13.621268  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621272  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621275  412953 command_runner.go:130] >     },
	I1210 07:44:13.621278  412953 command_runner.go:130] >     {
	I1210 07:44:13.621285  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.621289  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621294  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.621297  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621301  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621309  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.621317  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.621320  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621324  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.621327  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621331  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621334  412953 command_runner.go:130] >     },
	I1210 07:44:13.621337  412953 command_runner.go:130] >     {
	I1210 07:44:13.621343  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.621347  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621352  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.621359  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621363  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621371  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.621390  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.621393  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621397  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.621401  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621404  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621408  412953 command_runner.go:130] >       },
	I1210 07:44:13.621411  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621415  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621418  412953 command_runner.go:130] >     },
	I1210 07:44:13.621421  412953 command_runner.go:130] >     {
	I1210 07:44:13.621427  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.621431  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621435  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.621438  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621442  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621449  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.621456  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.621459  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621463  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.621466  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621470  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.621473  412953 command_runner.go:130] >       },
	I1210 07:44:13.621477  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621481  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.621483  412953 command_runner.go:130] >     }
	I1210 07:44:13.621486  412953 command_runner.go:130] >   ]
	I1210 07:44:13.621490  412953 command_runner.go:130] > }
	I1210 07:44:13.622855  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.622877  412953 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:44:13.622884  412953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:44:13.622995  412953 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
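The kubelet flags above are normally installed as a systemd override rather than by editing kubelet.service itself. A minimal sketch of the resulting drop-in, assuming a conventional path (the 10-kubeadm.conf location is an assumption; the unit content is taken from the log):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  (path assumed)
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

The empty ExecStart= line is standard systemd practice: it clears the start command inherited from the base unit before the override is applied.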
	I1210 07:44:13.623104  412953 ssh_runner.go:195] Run: crio config
	I1210 07:44:13.670610  412953 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 07:44:13.670640  412953 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 07:44:13.670648  412953 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 07:44:13.670652  412953 command_runner.go:130] > #
	I1210 07:44:13.670659  412953 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 07:44:13.670667  412953 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 07:44:13.670674  412953 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 07:44:13.670691  412953 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 07:44:13.670699  412953 command_runner.go:130] > # reload'.
	I1210 07:44:13.670706  412953 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 07:44:13.670713  412953 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 07:44:13.670722  412953 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 07:44:13.670728  412953 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 07:44:13.670733  412953 command_runner.go:130] > [crio]
	I1210 07:44:13.670747  412953 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 07:44:13.670755  412953 command_runner.go:130] > # containers images, in this directory.
	I1210 07:44:13.670764  412953 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 07:44:13.670774  412953 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 07:44:13.670784  412953 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 07:44:13.670792  412953 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 07:44:13.670799  412953 command_runner.go:130] > # imagestore = ""
	I1210 07:44:13.670805  412953 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 07:44:13.670812  412953 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 07:44:13.670819  412953 command_runner.go:130] > # storage_driver = "overlay"
	I1210 07:44:13.670826  412953 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 07:44:13.670832  412953 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 07:44:13.670839  412953 command_runner.go:130] > # storage_option = [
	I1210 07:44:13.670842  412953 command_runner.go:130] > # ]
	I1210 07:44:13.670848  412953 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 07:44:13.670854  412953 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 07:44:13.670864  412953 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 07:44:13.670876  412953 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 07:44:13.670886  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 07:44:13.670890  412953 command_runner.go:130] > # always happen on a node reboot
	I1210 07:44:13.670897  412953 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 07:44:13.670908  412953 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 07:44:13.670916  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 07:44:13.670921  412953 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 07:44:13.670927  412953 command_runner.go:130] > # version_file_persist = ""
	I1210 07:44:13.670948  412953 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 07:44:13.670957  412953 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 07:44:13.670965  412953 command_runner.go:130] > # internal_wipe = true
	I1210 07:44:13.670973  412953 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 07:44:13.670982  412953 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 07:44:13.670986  412953 command_runner.go:130] > # internal_repair = true
	I1210 07:44:13.670992  412953 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 07:44:13.671000  412953 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 07:44:13.671005  412953 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 07:44:13.671033  412953 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 07:44:13.671041  412953 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 07:44:13.671047  412953 command_runner.go:130] > [crio.api]
	I1210 07:44:13.671052  412953 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 07:44:13.671057  412953 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 07:44:13.671064  412953 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 07:44:13.671297  412953 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 07:44:13.671315  412953 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 07:44:13.671322  412953 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 07:44:13.671326  412953 command_runner.go:130] > # stream_port = "0"
	I1210 07:44:13.671356  412953 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 07:44:13.671366  412953 command_runner.go:130] > # stream_enable_tls = false
	I1210 07:44:13.671373  412953 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 07:44:13.671558  412953 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 07:44:13.671575  412953 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 07:44:13.671582  412953 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671587  412953 command_runner.go:130] > # stream_tls_cert = ""
	I1210 07:44:13.671593  412953 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 07:44:13.671617  412953 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671819  412953 command_runner.go:130] > # stream_tls_key = ""
	I1210 07:44:13.671835  412953 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 07:44:13.671853  412953 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 07:44:13.671864  412953 command_runner.go:130] > # automatically pick up the changes.
	I1210 07:44:13.671868  412953 command_runner.go:130] > # stream_tls_ca = ""
	I1210 07:44:13.671887  412953 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.671896  412953 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 07:44:13.671903  412953 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.672102  412953 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 07:44:13.672121  412953 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 07:44:13.672128  412953 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 07:44:13.672131  412953 command_runner.go:130] > [crio.runtime]
	I1210 07:44:13.672137  412953 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 07:44:13.672162  412953 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 07:44:13.672172  412953 command_runner.go:130] > # "nofile=1024:2048"
	I1210 07:44:13.672179  412953 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 07:44:13.672183  412953 command_runner.go:130] > # default_ulimits = [
	I1210 07:44:13.672188  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672195  412953 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 07:44:13.672201  412953 command_runner.go:130] > # no_pivot = false
	I1210 07:44:13.672207  412953 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 07:44:13.672214  412953 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 07:44:13.672219  412953 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 07:44:13.672235  412953 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 07:44:13.672241  412953 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 07:44:13.672248  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672445  412953 command_runner.go:130] > # conmon = ""
	I1210 07:44:13.672461  412953 command_runner.go:130] > # Cgroup setting for conmon
	I1210 07:44:13.672469  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 07:44:13.672473  412953 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 07:44:13.672480  412953 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 07:44:13.672502  412953 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 07:44:13.672522  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672875  412953 command_runner.go:130] > # conmon_env = [
	I1210 07:44:13.672888  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672895  412953 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 07:44:13.672900  412953 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 07:44:13.672907  412953 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 07:44:13.673114  412953 command_runner.go:130] > # default_env = [
	I1210 07:44:13.673128  412953 command_runner.go:130] > # ]
	I1210 07:44:13.673149  412953 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 07:44:13.673177  412953 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1210 07:44:13.673192  412953 command_runner.go:130] > # selinux = false
	I1210 07:44:13.673200  412953 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 07:44:13.673211  412953 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 07:44:13.673216  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673222  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.673228  412953 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 07:44:13.673240  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673428  412953 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 07:44:13.673444  412953 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 07:44:13.673452  412953 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 07:44:13.673459  412953 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 07:44:13.673478  412953 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 07:44:13.673488  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673492  412953 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 07:44:13.673498  412953 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 07:44:13.673505  412953 command_runner.go:130] > # the cgroup blockio controller.
	I1210 07:44:13.673509  412953 command_runner.go:130] > # blockio_config_file = ""
	I1210 07:44:13.673515  412953 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 07:44:13.673522  412953 command_runner.go:130] > # blockio parameters.
	I1210 07:44:13.673725  412953 command_runner.go:130] > # blockio_reload = false
	I1210 07:44:13.673738  412953 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 07:44:13.673742  412953 command_runner.go:130] > # irqbalance daemon.
	I1210 07:44:13.673748  412953 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 07:44:13.673757  412953 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1210 07:44:13.673788  412953 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 07:44:13.673801  412953 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 07:44:13.673807  412953 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 07:44:13.673816  412953 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 07:44:13.673821  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673830  412953 command_runner.go:130] > # rdt_config_file = ""
	I1210 07:44:13.673837  412953 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 07:44:13.674053  412953 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 07:44:13.674071  412953 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 07:44:13.674076  412953 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 07:44:13.674083  412953 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 07:44:13.674102  412953 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 07:44:13.674116  412953 command_runner.go:130] > # will be added.
	I1210 07:44:13.674121  412953 command_runner.go:130] > # default_capabilities = [
	I1210 07:44:13.674343  412953 command_runner.go:130] > # 	"CHOWN",
	I1210 07:44:13.674352  412953 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 07:44:13.674356  412953 command_runner.go:130] > # 	"FSETID",
	I1210 07:44:13.674359  412953 command_runner.go:130] > # 	"FOWNER",
	I1210 07:44:13.674363  412953 command_runner.go:130] > # 	"SETGID",
	I1210 07:44:13.674366  412953 command_runner.go:130] > # 	"SETUID",
	I1210 07:44:13.674423  412953 command_runner.go:130] > # 	"SETPCAP",
	I1210 07:44:13.674435  412953 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 07:44:13.674593  412953 command_runner.go:130] > # 	"KILL",
	I1210 07:44:13.674604  412953 command_runner.go:130] > # ]
	I1210 07:44:13.674621  412953 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 07:44:13.674632  412953 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 07:44:13.674812  412953 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 07:44:13.674829  412953 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 07:44:13.674836  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.674844  412953 command_runner.go:130] > default_sysctls = [
	I1210 07:44:13.674849  412953 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 07:44:13.674855  412953 command_runner.go:130] > ]
	I1210 07:44:13.674860  412953 command_runner.go:130] > # List of devices on the host that a
	I1210 07:44:13.674883  412953 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 07:44:13.674902  412953 command_runner.go:130] > # allowed_devices = [
	I1210 07:44:13.675282  412953 command_runner.go:130] > # 	"/dev/fuse",
	I1210 07:44:13.675296  412953 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 07:44:13.675300  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675305  412953 command_runner.go:130] > # List of additional devices, specified as
	I1210 07:44:13.675313  412953 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 07:44:13.675339  412953 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 07:44:13.675346  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.675350  412953 command_runner.go:130] > # additional_devices = [
	I1210 07:44:13.675524  412953 command_runner.go:130] > # ]
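	Using the documented "<device-on-host>:<device-on-container>:<permissions>" form, a minimal sketch of exposing one extra device to every container (the device string is the example from the comment above):
	
	[crio.runtime]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]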
	I1210 07:44:13.675539  412953 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 07:44:13.675543  412953 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 07:44:13.675549  412953 command_runner.go:130] > # 	"/etc/cdi",
	I1210 07:44:13.675552  412953 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 07:44:13.675555  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675562  412953 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 07:44:13.675584  412953 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 07:44:13.675594  412953 command_runner.go:130] > # Defaults to false.
	I1210 07:44:13.675951  412953 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 07:44:13.675970  412953 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 07:44:13.675978  412953 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 07:44:13.675982  412953 command_runner.go:130] > # hooks_dir = [
	I1210 07:44:13.676213  412953 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 07:44:13.676224  412953 command_runner.go:130] > # ]
	I1210 07:44:13.676231  412953 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 07:44:13.676237  412953 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 07:44:13.676246  412953 command_runner.go:130] > # its default mounts from the following two files:
	I1210 07:44:13.676261  412953 command_runner.go:130] > #
	I1210 07:44:13.676273  412953 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 07:44:13.676280  412953 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 07:44:13.676286  412953 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 07:44:13.676291  412953 command_runner.go:130] > #
	I1210 07:44:13.676298  412953 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 07:44:13.676304  412953 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 07:44:13.676313  412953 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 07:44:13.676318  412953 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 07:44:13.676321  412953 command_runner.go:130] > #
	I1210 07:44:13.676325  412953 command_runner.go:130] > # default_mounts_file = ""
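	A minimal sketch of pointing CRI-O at the override mounts file (the mount shown is illustrative; the file uses the /SRC:/DST one-mount-per-line format described above):
	
	[crio.runtime]
	default_mounts_file = "/etc/containers/mounts.conf"
	# where /etc/containers/mounts.conf would contain lines such as:
	#   /usr/share/zoneinfo:/usr/share/zoneinfo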
	I1210 07:44:13.676345  412953 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 07:44:13.676358  412953 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 07:44:13.676363  412953 command_runner.go:130] > # pids_limit = -1
	I1210 07:44:13.676375  412953 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 07:44:13.676381  412953 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 07:44:13.676391  412953 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 07:44:13.676400  412953 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 07:44:13.676412  412953 command_runner.go:130] > # log_size_max = -1
	I1210 07:44:13.676423  412953 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 07:44:13.676626  412953 command_runner.go:130] > # log_to_journald = false
	I1210 07:44:13.676643  412953 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 07:44:13.676650  412953 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 07:44:13.676677  412953 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 07:44:13.676879  412953 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 07:44:13.676891  412953 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 07:44:13.676896  412953 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 07:44:13.676903  412953 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 07:44:13.676909  412953 command_runner.go:130] > # read_only = false
	I1210 07:44:13.676916  412953 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 07:44:13.676942  412953 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 07:44:13.676953  412953 command_runner.go:130] > # live configuration reload.
	I1210 07:44:13.676956  412953 command_runner.go:130] > # log_level = "info"
	I1210 07:44:13.676967  412953 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 07:44:13.676977  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.677149  412953 command_runner.go:130] > # log_filter = ""
	I1210 07:44:13.677166  412953 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677173  412953 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 07:44:13.677177  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677186  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677212  412953 command_runner.go:130] > # uid_mappings = ""
	I1210 07:44:13.677225  412953 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677231  412953 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 07:44:13.677238  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677246  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677420  412953 command_runner.go:130] > # gid_mappings = ""
	I1210 07:44:13.677432  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 07:44:13.677439  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677446  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677455  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677480  412953 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 07:44:13.677493  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 07:44:13.677500  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677512  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677522  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677681  412953 command_runner.go:130] > # minimum_mappable_gid = -1
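	Both mapping options take containerID:HostID:Size ranges as described above; a minimal sketch mapping container root onto an unprivileged host range (values illustrative, and note all four options are deprecated in favor of Kubernetes user namespace support, KEP-127):
	
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	minimum_mappable_uid = 100000
	minimum_mappable_gid = 100000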
	I1210 07:44:13.677697  412953 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 07:44:13.677705  412953 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 07:44:13.677711  412953 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1210 07:44:13.677936  412953 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 07:44:13.677953  412953 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 07:44:13.677960  412953 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 07:44:13.677965  412953 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 07:44:13.677970  412953 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 07:44:13.677991  412953 command_runner.go:130] > # drop_infra_ctr = true
	I1210 07:44:13.678004  412953 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 07:44:13.678011  412953 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 07:44:13.678020  412953 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 07:44:13.678031  412953 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 07:44:13.678039  412953 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 07:44:13.678048  412953 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 07:44:13.678054  412953 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 07:44:13.678068  412953 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 07:44:13.678282  412953 command_runner.go:130] > # shared_cpuset = ""
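	Both cpuset options take the Linux CPU list format; a minimal sketch reserving the first two CPUs for infra containers while allowing the next two to be shared (values illustrative):
	
	[crio.runtime]
	infra_ctr_cpuset = "0-1"
	shared_cpuset = "2-3"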
	I1210 07:44:13.678299  412953 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 07:44:13.678306  412953 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 07:44:13.678310  412953 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 07:44:13.678328  412953 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 07:44:13.678337  412953 command_runner.go:130] > # pinns_path = ""
	I1210 07:44:13.678343  412953 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 07:44:13.678349  412953 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 07:44:13.678540  412953 command_runner.go:130] > # enable_criu_support = true
	I1210 07:44:13.678551  412953 command_runner.go:130] > # Enable/disable the generation of the container,
	I1210 07:44:13.678558  412953 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1210 07:44:13.678563  412953 command_runner.go:130] > # enable_pod_events = false
	I1210 07:44:13.678572  412953 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 07:44:13.678599  412953 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 07:44:13.678604  412953 command_runner.go:130] > # default_runtime = "crun"
	I1210 07:44:13.678609  412953 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 07:44:13.678622  412953 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1210 07:44:13.678632  412953 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 07:44:13.678642  412953 command_runner.go:130] > # creation as a file is not desired either.
	I1210 07:44:13.678651  412953 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 07:44:13.678663  412953 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 07:44:13.678672  412953 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 07:44:13.678923  412953 command_runner.go:130] > # ]
	I1210 07:44:13.678950  412953 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 07:44:13.678958  412953 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 07:44:13.678972  412953 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 07:44:13.678982  412953 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 07:44:13.678985  412953 command_runner.go:130] > #
	I1210 07:44:13.678990  412953 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 07:44:13.678995  412953 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 07:44:13.679001  412953 command_runner.go:130] > # runtime_type = "oci"
	I1210 07:44:13.679006  412953 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 07:44:13.679035  412953 command_runner.go:130] > # inherit_default_runtime = false
	I1210 07:44:13.679045  412953 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 07:44:13.679050  412953 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 07:44:13.679054  412953 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 07:44:13.679060  412953 command_runner.go:130] > # monitor_env = []
	I1210 07:44:13.679065  412953 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 07:44:13.679069  412953 command_runner.go:130] > # allowed_annotations = []
	I1210 07:44:13.679076  412953 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 07:44:13.679085  412953 command_runner.go:130] > # no_sync_log = false
	I1210 07:44:13.679101  412953 command_runner.go:130] > # default_annotations = {}
	I1210 07:44:13.679107  412953 command_runner.go:130] > # stream_websockets = false
	I1210 07:44:13.679111  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.679142  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.679152  412953 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 07:44:13.679158  412953 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 07:44:13.679174  412953 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 07:44:13.679188  412953 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 07:44:13.679194  412953 command_runner.go:130] > #   in $PATH.
	I1210 07:44:13.679200  412953 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 07:44:13.679207  412953 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 07:44:13.679213  412953 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 07:44:13.679219  412953 command_runner.go:130] > #   state.
	I1210 07:44:13.679225  412953 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 07:44:13.679231  412953 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 07:44:13.679240  412953 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 07:44:13.679252  412953 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 07:44:13.679260  412953 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 07:44:13.679267  412953 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 07:44:13.679274  412953 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 07:44:13.679281  412953 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 07:44:13.679291  412953 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 07:44:13.679296  412953 command_runner.go:130] > #   The currently recognized values are:
	I1210 07:44:13.679302  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 07:44:13.679311  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 07:44:13.679325  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 07:44:13.679338  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 07:44:13.679345  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 07:44:13.679357  412953 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 07:44:13.679365  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 07:44:13.679374  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 07:44:13.679380  412953 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 07:44:13.679398  412953 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 07:44:13.679409  412953 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 07:44:13.679420  412953 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 07:44:13.679430  412953 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 07:44:13.679436  412953 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 07:44:13.679445  412953 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 07:44:13.679452  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 07:44:13.679461  412953 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 07:44:13.679464  412953 command_runner.go:130] > #   deprecated option "conmon".
	I1210 07:44:13.679478  412953 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 07:44:13.679487  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 07:44:13.679493  412953 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 07:44:13.679503  412953 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 07:44:13.679511  412953 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 07:44:13.679518  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 07:44:13.679525  412953 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1210 07:44:13.679531  412953 command_runner.go:130] > #   conmon-rs by using:
	I1210 07:44:13.679539  412953 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 07:44:13.679560  412953 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 07:44:13.679570  412953 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 07:44:13.679579  412953 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 07:44:13.679584  412953 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 07:44:13.679593  412953 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 07:44:13.679603  412953 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 07:44:13.679608  412953 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 07:44:13.679617  412953 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 07:44:13.679637  412953 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 07:44:13.679641  412953 command_runner.go:130] > #   when a machine crash happens.
	I1210 07:44:13.679649  412953 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 07:44:13.679659  412953 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 07:44:13.679667  412953 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 07:44:13.679675  412953 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 07:44:13.679681  412953 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 07:44:13.679700  412953 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1210 07:44:13.679707  412953 command_runner.go:130] > #
	I1210 07:44:13.679712  412953 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 07:44:13.679716  412953 command_runner.go:130] > #
	I1210 07:44:13.679727  412953 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 07:44:13.679736  412953 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1210 07:44:13.679742  412953 command_runner.go:130] > #
	I1210 07:44:13.679749  412953 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 07:44:13.679756  412953 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 07:44:13.679761  412953 command_runner.go:130] > #
	I1210 07:44:13.679773  412953 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 07:44:13.679780  412953 command_runner.go:130] > # feature.
	I1210 07:44:13.679782  412953 command_runner.go:130] > #
	I1210 07:44:13.679788  412953 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1210 07:44:13.679799  412953 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 07:44:13.679805  412953 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 07:44:13.679811  412953 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 07:44:13.679819  412953 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 07:44:13.679824  412953 command_runner.go:130] > #
	I1210 07:44:13.679831  412953 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 07:44:13.679840  412953 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 07:44:13.679848  412953 command_runner.go:130] > #
	I1210 07:44:13.679858  412953 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1210 07:44:13.679864  412953 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 07:44:13.679869  412953 command_runner.go:130] > #
	I1210 07:44:13.679875  412953 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 07:44:13.679881  412953 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 07:44:13.679887  412953 command_runner.go:130] > # limitation.
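	Putting the notifier prerequisites together, a minimal sketch of a runtime handler permitted to process the notifier annotation (handler name and path mirror the runc entry below; a pod would then set io.kubernetes.cri-o.seccompNotifierAction=stop and restartPolicy "Never", per the notes above):
	
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/libexec/crio/runc"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]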
	I1210 07:44:13.679891  412953 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 07:44:13.679896  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 07:44:13.679902  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.679909  412953 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 07:44:13.679913  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.679932  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.679940  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.679944  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.679948  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.679957  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.679961  412953 command_runner.go:130] > allowed_annotations = [
	I1210 07:44:13.680169  412953 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 07:44:13.680183  412953 command_runner.go:130] > ]
	I1210 07:44:13.680190  412953 command_runner.go:130] > privileged_without_host_devices = false
	I1210 07:44:13.680195  412953 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 07:44:13.680200  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 07:44:13.680204  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.680218  412953 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 07:44:13.680228  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.680233  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.680237  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.680244  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.680248  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.680257  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.680461  412953 command_runner.go:130] > privileged_without_host_devices = false
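	Following the table format documented above, a minimal sketch of registering an additional VM-type handler (handler name and all paths hypothetical; runtime_config_path is only valid here because runtime_type is "vm"):
	
	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"
	runtime_type = "vm"
	runtime_root = "/run/vc"
	runtime_config_path = "/etc/kata-containers/configuration.toml"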
	I1210 07:44:13.680480  412953 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 07:44:13.680486  412953 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 07:44:13.680503  412953 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 07:44:13.680522  412953 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1210 07:44:13.680533  412953 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 07:44:13.680547  412953 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 07:44:13.680554  412953 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 07:44:13.680563  412953 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 07:44:13.680579  412953 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 07:44:13.680591  412953 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 07:44:13.680597  412953 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 07:44:13.680609  412953 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 07:44:13.680613  412953 command_runner.go:130] > # Example:
	I1210 07:44:13.680617  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 07:44:13.680625  412953 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 07:44:13.680632  412953 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 07:44:13.680643  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 07:44:13.680656  412953 command_runner.go:130] > # cpuset = "0-1"
	I1210 07:44:13.680660  412953 command_runner.go:130] > # cpushares = "5"
	I1210 07:44:13.680672  412953 command_runner.go:130] > # cpuquota = "1000"
	I1210 07:44:13.680676  412953 command_runner.go:130] > # cpuperiod = "100000"
	I1210 07:44:13.680680  412953 command_runner.go:130] > # cpulimit = "35"
	I1210 07:44:13.680686  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.680691  412953 command_runner.go:130] > # The workload name is workload-type.
	I1210 07:44:13.680706  412953 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 07:44:13.680717  412953 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 07:44:13.680730  412953 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 07:44:13.680742  412953 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 07:44:13.680748  412953 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1210 07:44:13.680756  412953 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 07:44:13.680763  412953 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 07:44:13.680767  412953 command_runner.go:130] > # Default value is set to true
	I1210 07:44:13.681004  412953 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 07:44:13.681022  412953 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 07:44:13.681028  412953 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 07:44:13.681032  412953 command_runner.go:130] > # Default value is set to 'false'
	I1210 07:44:13.681046  412953 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 07:44:13.681057  412953 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1210 07:44:13.681066  412953 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 07:44:13.681072  412953 command_runner.go:130] > # timezone = ""
	I1210 07:44:13.681078  412953 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 07:44:13.681082  412953 command_runner.go:130] > #
	I1210 07:44:13.681089  412953 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 07:44:13.681101  412953 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 07:44:13.681105  412953 command_runner.go:130] > [crio.image]
	I1210 07:44:13.681112  412953 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 07:44:13.681133  412953 command_runner.go:130] > # default_transport = "docker://"
	I1210 07:44:13.681145  412953 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 07:44:13.681152  412953 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681158  412953 command_runner.go:130] > # global_auth_file = ""
	I1210 07:44:13.681163  412953 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 07:44:13.681168  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681175  412953 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.681182  412953 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 07:44:13.681198  412953 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681207  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681403  412953 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 07:44:13.681421  412953 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 07:44:13.681429  412953 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 07:44:13.681436  412953 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 07:44:13.681442  412953 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 07:44:13.681460  412953 command_runner.go:130] > # pause_command = "/pause"
	I1210 07:44:13.681466  412953 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 07:44:13.681473  412953 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 07:44:13.681481  412953 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 07:44:13.681487  412953 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 07:44:13.681495  412953 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 07:44:13.681508  412953 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 07:44:13.681512  412953 command_runner.go:130] > # pinned_images = [
	I1210 07:44:13.681700  412953 command_runner.go:130] > # ]
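	A minimal sketch of pinning the pause image against kubelet garbage collection (image reference taken from the pause_image default above; the second entry shows the documented trailing-* glob form):
	
	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/pause*",
	]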
	I1210 07:44:13.681712  412953 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 07:44:13.681720  412953 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 07:44:13.681726  412953 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 07:44:13.681733  412953 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 07:44:13.681759  412953 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 07:44:13.681771  412953 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 07:44:13.681777  412953 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 07:44:13.681786  412953 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 07:44:13.681793  412953 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 07:44:13.681800  412953 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1210 07:44:13.681806  412953 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 07:44:13.682016  412953 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 07:44:13.682034  412953 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 07:44:13.682042  412953 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 07:44:13.682046  412953 command_runner.go:130] > # changing them here.
	I1210 07:44:13.682052  412953 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 07:44:13.682069  412953 command_runner.go:130] > # insecure_registries = [
	I1210 07:44:13.682078  412953 command_runner.go:130] > # ]
	I1210 07:44:13.682085  412953 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 07:44:13.682090  412953 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1210 07:44:13.682257  412953 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 07:44:13.682273  412953 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 07:44:13.682285  412953 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 07:44:13.682292  412953 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 07:44:13.682299  412953 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 07:44:13.682504  412953 command_runner.go:130] > # auto_reload_registries = false
	I1210 07:44:13.682520  412953 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 07:44:13.682532  412953 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1210 07:44:13.682540  412953 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 07:44:13.682567  412953 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1210 07:44:13.682578  412953 command_runner.go:130] > # The mode of short name resolution.
	I1210 07:44:13.682585  412953 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 07:44:13.682595  412953 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 07:44:13.682600  412953 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 07:44:13.682615  412953 command_runner.go:130] > # short_name_mode = "enforcing"
	I1210 07:44:13.682622  412953 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1210 07:44:13.682630  412953 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 07:44:13.683045  412953 command_runner.go:130] > # oci_artifact_mount_support = true
	I1210 07:44:13.683063  412953 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 07:44:13.683080  412953 command_runner.go:130] > # CNI plugins.
	I1210 07:44:13.683084  412953 command_runner.go:130] > [crio.network]
	I1210 07:44:13.683091  412953 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 07:44:13.683100  412953 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1210 07:44:13.683104  412953 command_runner.go:130] > # cni_default_network = ""
	I1210 07:44:13.683110  412953 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 07:44:13.683116  412953 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 07:44:13.683122  412953 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 07:44:13.683126  412953 command_runner.go:130] > # plugin_dirs = [
	I1210 07:44:13.683439  412953 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 07:44:13.683727  412953 command_runner.go:130] > # ]
	I1210 07:44:13.683742  412953 command_runner.go:130] > # List of included pod metrics.
	I1210 07:44:13.684014  412953 command_runner.go:130] > # included_pod_metrics = [
	I1210 07:44:13.684312  412953 command_runner.go:130] > # ]
	I1210 07:44:13.684328  412953 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1210 07:44:13.684333  412953 command_runner.go:130] > [crio.metrics]
	I1210 07:44:13.684339  412953 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 07:44:13.684905  412953 command_runner.go:130] > # enable_metrics = false
	I1210 07:44:13.684921  412953 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 07:44:13.684926  412953 command_runner.go:130] > # Per default all metrics are enabled.
	I1210 07:44:13.684933  412953 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 07:44:13.684946  412953 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 07:44:13.684969  412953 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 07:44:13.685240  412953 command_runner.go:130] > # metrics_collectors = [
	I1210 07:44:13.685580  412953 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 07:44:13.685893  412953 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 07:44:13.686203  412953 command_runner.go:130] > # 	"containers_oom_total",
	I1210 07:44:13.686514  412953 command_runner.go:130] > # 	"processes_defunct",
	I1210 07:44:13.686821  412953 command_runner.go:130] > # 	"operations_total",
	I1210 07:44:13.687152  412953 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 07:44:13.687476  412953 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 07:44:13.687786  412953 command_runner.go:130] > # 	"operations_errors_total",
	I1210 07:44:13.688090  412953 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 07:44:13.688395  412953 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 07:44:13.688727  412953 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 07:44:13.689070  412953 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 07:44:13.689083  412953 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 07:44:13.689089  412953 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 07:44:13.689093  412953 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 07:44:13.689098  412953 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 07:44:13.689634  412953 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 07:44:13.689646  412953 command_runner.go:130] > # ]
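	A minimal sketch of enabling metrics with a reduced collector set (collector names taken from the list above; host and port are the defaults shown just below):
	
	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]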
	I1210 07:44:13.689654  412953 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 07:44:13.689658  412953 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 07:44:13.689671  412953 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 07:44:13.689696  412953 command_runner.go:130] > # metrics_port = 9090
	I1210 07:44:13.689701  412953 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 07:44:13.689706  412953 command_runner.go:130] > # metrics_socket = ""
	I1210 07:44:13.689716  412953 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 07:44:13.689722  412953 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 07:44:13.689731  412953 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 07:44:13.689737  412953 command_runner.go:130] > # certificate on any modification event.
	I1210 07:44:13.689741  412953 command_runner.go:130] > # metrics_cert = ""
	I1210 07:44:13.689746  412953 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 07:44:13.689751  412953 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 07:44:13.689764  412953 command_runner.go:130] > # metrics_key = ""
	I1210 07:44:13.689770  412953 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 07:44:13.689774  412953 command_runner.go:130] > [crio.tracing]
	I1210 07:44:13.689781  412953 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 07:44:13.689785  412953 command_runner.go:130] > # enable_tracing = false
	I1210 07:44:13.689792  412953 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1210 07:44:13.689799  412953 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 07:44:13.689806  412953 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 07:44:13.689833  412953 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
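	A minimal sketch of enabling tracing with every span sampled, using the documented always-sample value and the default endpoint:
	
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000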
	I1210 07:44:13.689842  412953 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 07:44:13.689845  412953 command_runner.go:130] > [crio.nri]
	I1210 07:44:13.689850  412953 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 07:44:13.689861  412953 command_runner.go:130] > # enable_nri = true
	I1210 07:44:13.689865  412953 command_runner.go:130] > # NRI socket to listen on.
	I1210 07:44:13.689873  412953 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 07:44:13.689877  412953 command_runner.go:130] > # NRI plugin directory to use.
	I1210 07:44:13.689882  412953 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 07:44:13.689890  412953 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 07:44:13.689894  412953 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 07:44:13.689900  412953 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 07:44:13.689965  412953 command_runner.go:130] > # nri_disable_connections = false
	I1210 07:44:13.689975  412953 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 07:44:13.689991  412953 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 07:44:13.689997  412953 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 07:44:13.690006  412953 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 07:44:13.690011  412953 command_runner.go:130] > # NRI default validator configuration.
	I1210 07:44:13.690018  412953 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1210 07:44:13.690027  412953 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 07:44:13.690036  412953 command_runner.go:130] > # can be restricted/rejected:
	I1210 07:44:13.690044  412953 command_runner.go:130] > # - OCI hook injection
	I1210 07:44:13.690060  412953 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 07:44:13.690068  412953 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 07:44:13.690072  412953 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 07:44:13.690076  412953 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 07:44:13.690083  412953 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 07:44:13.690093  412953 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 07:44:13.690099  412953 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 07:44:13.690107  412953 command_runner.go:130] > #
	I1210 07:44:13.690111  412953 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 07:44:13.690115  412953 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 07:44:13.690122  412953 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 07:44:13.690134  412953 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 07:44:13.690148  412953 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 07:44:13.690154  412953 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 07:44:13.690159  412953 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 07:44:13.690165  412953 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 07:44:13.690168  412953 command_runner.go:130] > # ]
	I1210 07:44:13.690174  412953 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
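	A minimal sketch of enabling the default validator so that OCI hook injection is rejected while the other adjustments stay permitted (keys taken from the list above):
	
	[crio.nri]
	enable_nri = true
	
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true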
	I1210 07:44:13.690182  412953 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 07:44:13.690192  412953 command_runner.go:130] > [crio.stats]
	I1210 07:44:13.690198  412953 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 07:44:13.690212  412953 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 07:44:13.690219  412953 command_runner.go:130] > # stats_collection_period = 0
	I1210 07:44:13.690225  412953 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 07:44:13.690232  412953 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 07:44:13.690243  412953 command_runner.go:130] > # collection_period = 0
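	A minimal sketch of switching both collectors from on-demand to a fixed period (the 10-second value is illustrative):
	
	[crio.stats]
	stats_collection_period = 10
	collection_period = 10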
	I1210 07:44:13.692149  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648702659Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 07:44:13.692177  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648881459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 07:44:13.692188  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648978856Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 07:44:13.692196  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649067965Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 07:44:13.692212  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649235303Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.692221  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649618857Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 07:44:13.692237  412953 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 07:44:13.692317  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:13.692335  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:13.692359  412953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:44:13.692385  412953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:44:13.692523  412953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
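
The block above is the multi-document kubeadm config that minikube renders in memory: an InitConfiguration and a ClusterConfiguration for kubeadm itself (kubeadm.k8s.io/v1beta4), plus a KubeletConfiguration and a KubeProxyConfiguration, separated by "---". It is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal Go sketch, assuming gopkg.in/yaml.v3 (any multi-document YAML decoder works) and a placeholder file name, of splitting such a stream and listing each document's kind:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumption: yaml.v3 is available
    )

    func main() {
        // Placeholder path; the log above writes /var/tmp/minikube/kubeadm.yaml.new.
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            // Each document carries its own apiVersion/kind,
            // e.g. kubeadm.k8s.io/v1beta4 InitConfiguration.
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }
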
	I1210 07:44:13.692606  412953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:44:13.699318  412953 command_runner.go:130] > kubeadm
	I1210 07:44:13.699338  412953 command_runner.go:130] > kubectl
	I1210 07:44:13.699343  412953 command_runner.go:130] > kubelet
	I1210 07:44:13.700197  412953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:44:13.700295  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:44:13.707538  412953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:44:13.720130  412953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:44:13.732445  412953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1210 07:44:13.744899  412953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:44:13.748570  412953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
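
The grep above confirms that control-plane.minikube.internal already maps to the node IP in the guest's /etc/hosts, so no rewrite is needed before the kubelet restart that follows. A small Go sketch of the same check; the file path and names are the ones from the log, the helper itself is hypothetical:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasHostEntry reports whether the hosts file maps ip to host, skipping comment lines.
    func hasHostEntry(path, ip, host string) (bool, error) {
        f, err := os.Open(path)
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) < 2 || strings.HasPrefix(fields[0], "#") || fields[0] != ip {
                continue
            }
            for _, h := range fields[1:] {
                if h == host {
                    return true, nil
                }
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hasHostEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
        fmt.Println(ok, err)
    }
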
	I1210 07:44:13.748818  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.875367  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:13.911048  412953 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:44:13.911077  412953 certs.go:195] generating shared ca certs ...
	I1210 07:44:13.911094  412953 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:13.911231  412953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:44:13.911285  412953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:44:13.911297  412953 certs.go:257] generating profile certs ...
	I1210 07:44:13.911404  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:44:13.911477  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:44:13.911525  412953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:44:13.911539  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 07:44:13.911552  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 07:44:13.911567  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 07:44:13.911578  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 07:44:13.911593  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 07:44:13.911610  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 07:44:13.911622  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 07:44:13.911637  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 07:44:13.911683  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:44:13.911717  412953 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:44:13.911729  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:44:13.911762  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:44:13.911791  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:44:13.911819  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:44:13.911865  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:13.911900  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /usr/share/ca-certificates/3785282.pem
	I1210 07:44:13.911918  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:13.911928  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem -> /usr/share/ca-certificates/378528.pem
	I1210 07:44:13.912577  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:44:13.931574  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:44:13.949287  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:44:13.966704  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:44:13.984537  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:44:14.005273  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:44:14.024726  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:44:14.043246  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:44:14.061500  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:44:14.078597  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:44:14.096003  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:44:14.113316  412953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:44:14.125784  412953 ssh_runner.go:195] Run: openssl version
	I1210 07:44:14.132223  412953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 07:44:14.132300  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.139621  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:44:14.146891  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150749  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150804  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150854  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.191223  412953 command_runner.go:130] > 3ec20f2e
	I1210 07:44:14.191672  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:44:14.199095  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.206573  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:44:14.214321  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218345  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218446  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218516  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.259240  412953 command_runner.go:130] > b5213941
	I1210 07:44:14.259776  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:44:14.267399  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.274814  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:44:14.282253  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286034  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286101  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286170  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.327536  412953 command_runner.go:130] > 51391683
	I1210 07:44:14.327674  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
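
The three openssl passes above install each PEM into the system trust store the way update-ca-certificates would: openssl x509 -hash -noout prints the subject-name hash (3ec20f2e, b5213941 and 51391683 here), and ln -fs then exposes the certificate as /etc/ssl/certs/<hash>.0, the lookup path OpenSSL actually uses. A sketch of those two steps in Go, shelling out to openssl exactly as the log does; the paths are placeholders:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pemPath, certsDir string) error {
        // openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        _ = os.Remove(link) // emulate ln -fs: replace any existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
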
	I1210 07:44:14.335034  412953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338581  412953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338609  412953 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 07:44:14.338616  412953 command_runner.go:130] > Device: 259,1	Inode: 1322411     Links: 1
	I1210 07:44:14.338623  412953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:14.338628  412953 command_runner.go:130] > Access: 2025-12-10 07:40:07.276287392 +0000
	I1210 07:44:14.338634  412953 command_runner.go:130] > Modify: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338639  412953 command_runner.go:130] > Change: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338644  412953 command_runner.go:130] >  Birth: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338702  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:44:14.379186  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.379683  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:44:14.420781  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.421255  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:44:14.461926  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.462055  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:44:14.509912  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.510522  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:44:14.558004  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.558477  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:44:14.599044  412953 command_runner.go:130] > Certificate will not expire
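
Each openssl x509 -noout -checkend 86400 call above asks whether the certificate expires within the next 86400 seconds; "Certificate will not expire" means every control-plane cert is good for at least another 24 hours, so none need regenerating. The same check in pure Go with crypto/x509, as a sketch using one of the files checked above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin is the Go equivalent of: openssl x509 -noout -checkend <seconds>
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
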
	I1210 07:44:14.599455  412953 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:14.599550  412953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:44:14.599615  412953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:44:14.630244  412953 cri.go:89] found id: ""
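
StartCluster first lists kube-system containers through crictl to decide between a fresh kubeadm init and a restart of an existing control plane; the empty ID list here ('found id: ""') combined with the existing config files found just below steers it onto the restart path. A sketch of the same probe from Go; the crictl flags are exactly those in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same probe as the log: list all kube-system containers by ID only.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }
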
	I1210 07:44:14.630352  412953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:44:14.638132  412953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 07:44:14.638152  412953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 07:44:14.638158  412953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 07:44:14.638171  412953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:44:14.638176  412953 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:44:14.638225  412953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:44:14.645608  412953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:44:14.646002  412953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-314220" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646112  412953 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-376671/kubeconfig needs updating (will repair): [kubeconfig missing "functional-314220" cluster setting kubeconfig missing "functional-314220" context setting]
	I1210 07:44:14.646387  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.646808  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646962  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.647769  412953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:44:14.647791  412953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 07:44:14.647797  412953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:44:14.647801  412953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:44:14.647806  412953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:44:14.647858  412953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 07:44:14.648134  412953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:44:14.656007  412953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 07:44:14.656041  412953 kubeadm.go:602] duration metric: took 17.859608ms to restartPrimaryControlPlane
	I1210 07:44:14.656051  412953 kubeadm.go:403] duration metric: took 56.601079ms to StartCluster
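
restartPrimaryControlPlane decided in about 18ms that "The running cluster does not require reconfiguration": the sudo diff -u above compares the kubeadm.yaml already on the node against the kubeadm.yaml.new copied earlier, and an empty diff short-circuits any kubeadm re-run. A simplified local sketch of that decision; the real code runs diff over SSH:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // needsReconfigure compares the current and proposed kubeadm configs byte-for-byte.
    func needsReconfigure(current, proposed string) (bool, error) {
        a, err := os.ReadFile(current)
        if err != nil {
            return true, err // no current config on disk => reconfigure
        }
        b, err := os.ReadFile(proposed)
        if err != nil {
            return true, err
        }
        return !bytes.Equal(a, b), nil
    }

    func main() {
        changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
        fmt.Println("reconfigure needed:", changed)
    }
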
	I1210 07:44:14.656066  412953 settings.go:142] acquiring lock: {Name:mk83336eaf1e9f7632e16e15e8d9e14eb0e0d0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.656132  412953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.656799  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.657004  412953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:44:14.657416  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:14.657431  412953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:44:14.658092  412953 addons.go:70] Setting storage-provisioner=true in profile "functional-314220"
	I1210 07:44:14.658110  412953 addons.go:239] Setting addon storage-provisioner=true in "functional-314220"
	I1210 07:44:14.658137  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.658702  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.665050  412953 addons.go:70] Setting default-storageclass=true in profile "functional-314220"
	I1210 07:44:14.665125  412953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-314220"
	I1210 07:44:14.665550  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.671074  412953 out.go:179] * Verifying Kubernetes components...
	I1210 07:44:14.676445  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:14.698425  412953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:44:14.701187  412953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.701211  412953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:44:14.701278  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.705662  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.705841  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.706176  412953 addons.go:239] Setting addon default-storageclass=true in "functional-314220"
	I1210 07:44:14.706207  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.706646  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.744732  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:14.744810  412953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:14.744830  412953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:44:14.744900  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.778977  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:14.876345  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:14.912899  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.922881  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.662190  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662227  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662277  412953 retry.go:31] will retry after 311.954263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662347  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662381  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662389  412953 retry.go:31] will retry after 234.07921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
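
Every kubectl apply above fails with connection refused on localhost:8441 because the apiserver is still coming back up, so minikube's retry helper reschedules each apply with a short, growing, jittered delay (311ms and 234ms here, climbing to several seconds further down). A minimal sketch of that pattern, assuming a hypothetical retry helper rather than minikube's actual retry package:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping with jittered
    // exponential backoff between failures.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // jitter in [0.5, 1.5) of the current backoff, then double it
            sleep := time.Duration(float64(base) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            base *= 2
        }
        return err
    }

    func main() {
        n := 0
        _ = retry(5, 300*time.Millisecond, func() error {
            n++
            if n < 4 {
                return fmt.Errorf("connect: connection refused")
            }
            return nil
        })
    }
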
	I1210 07:44:15.662447  412953 node_ready.go:35] waiting up to 6m0s for node "functional-314220" to be "Ready" ...
	I1210 07:44:15.662665  412953 type.go:168] "Request Body" body=""
	I1210 07:44:15.662765  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:15.663157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
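
In parallel, node_ready.go polls GET /api/v1/nodes/functional-314220 (the round_trippers Request/Response pairs that follow) for up to 6m0s, treating connection-refused responses as transient until the node reports Ready. A hedged client-go sketch of the same wait loop; the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s"
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-314220", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            } // on "connection refused" just fall through and retry
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for node Ready")
    }
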
	I1210 07:44:15.897488  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.957295  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.957408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.957431  412953 retry.go:31] will retry after 307.155853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.974530  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.030916  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.034621  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.034655  412953 retry.go:31] will retry after 246.948718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.162840  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.162973  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.163310  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.265735  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:16.282284  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.335651  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.339071  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.339103  412953 retry.go:31] will retry after 647.058742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361763  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.361804  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361822  412953 retry.go:31] will retry after 514.560746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.663231  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.663327  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.663641  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.877219  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.942769  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.942876  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.942918  412953 retry.go:31] will retry after 1.098847883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.987296  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.051987  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.055923  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.055964  412953 retry.go:31] will retry after 522.145884ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.163324  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.163405  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.163711  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:17.578391  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.635896  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.639746  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.639777  412953 retry.go:31] will retry after 768.766099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.662946  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.663049  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.663399  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:17.663474  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:18.042986  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:18.101043  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.104777  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.104811  412953 retry.go:31] will retry after 877.527078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.163066  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.163146  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.163494  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.409040  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:18.473157  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.473195  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.473221  412953 retry.go:31] will retry after 1.043117699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.663503  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.663629  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.663908  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.983598  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:19.054379  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.057795  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.057861  412953 retry.go:31] will retry after 2.806616267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.163140  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.163219  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.163514  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:19.517094  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:19.577109  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.577146  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.577191  412953 retry.go:31] will retry after 2.260515502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.663401  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.663487  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.663841  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:19.663910  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:20.163656  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.163728  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.164096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:20.662791  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.662881  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.663196  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.162812  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.162886  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.163185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.662729  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.662808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.663095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.838627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:21.865153  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:21.916464  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.916504  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.916523  412953 retry.go:31] will retry after 2.650338189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.931641  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.931686  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.931712  412953 retry.go:31] will retry after 2.932548046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
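[Editor's note] Each apply above fails before any manifest reaches the cluster, because kubectl cannot download the OpenAPI schema it validates against. The error text itself suggests --validate=false, but that flag only skips the schema fetch: with the apiserver refusing connections on 8441, the apply would still fail at the next step. A hypothetical reproduction via os/exec (binary and file paths copied from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command(
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "--validate=false",
            "-f", "/etc/kubernetes/addons/storageclass.yaml",
        )
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            // Skipping validation removes the openapi round trip, but the
            // apply itself still needs a reachable apiserver.
            fmt.Println("apply failed:", err)
        }
    }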
	I1210 07:44:22.163174  412953 type.go:168] "Request Body" body=""
	I1210 07:44:22.163252  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:22.163593  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:22.163668  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:22.663491  412953 type.go:168] "Request Body" body=""
	I1210 07:44:22.663596  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:22.663955  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:23.162683  412953 type.go:168] "Request Body" body=""
	I1210 07:44:23.162754  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:23.163096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:23.662729  412953 type.go:168] "Request Body" body=""
	I1210 07:44:23.662804  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:23.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:24.162801  412953 type.go:168] "Request Body" body=""
	I1210 07:44:24.162914  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:24.163280  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
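[Editor's note] The Accept header repeated in every request block above is client-go's content negotiation: prefer the compact protobuf encoding, fall back to JSON. A sketch of configuring a clientset that way (header values copied from the log; this is not minikube's own setup code):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func protobufClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // Matches the Accept header seen in the request logs.
        cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
        cfg.ContentType = "application/vnd.kubernetes.protobuf"
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        client, err := protobufClient("/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        _ = client
    }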
	I1210 07:44:24.567824  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:24.621746  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:24.625216  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.625246  412953 retry.go:31] will retry after 7.727905191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.663687  412953 type.go:168] "Request Body" body=""
	I1210 07:44:24.663760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:24.664012  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:24.664064  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:24.864476  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:24.921495  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:24.921557  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:24.921581  412953 retry.go:31] will retry after 3.915945796s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:25.162916  412953 type.go:168] "Request Body" body=""
	I1210 07:44:25.162995  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:25.163327  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:25.663045  412953 type.go:168] "Request Body" body=""
	I1210 07:44:25.663124  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:25.663415  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:26.163115  412953 type.go:168] "Request Body" body=""
	I1210 07:44:26.163196  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:26.163520  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:26.663439  412953 type.go:168] "Request Body" body=""
	I1210 07:44:26.663518  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:26.663824  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:27.163578  412953 type.go:168] "Request Body" body=""
	I1210 07:44:27.163681  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:27.164000  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:27.164069  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:27.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:44:27.662823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:27.663205  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:44:28.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:28.163219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:44:28.662869  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:28.663208  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:28.838651  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:28.899244  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:28.899280  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:28.899298  412953 retry.go:31] will retry after 8.041674514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:29.162702  412953 type.go:168] "Request Body" body=""
	I1210 07:44:29.162772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:29.163052  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:29.662768  412953 type.go:168] "Request Body" body=""
	I1210 07:44:29.662841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:29.663169  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:29.663226  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:30.162886  412953 type.go:168] "Request Body" body=""
	I1210 07:44:30.162968  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:30.163308  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:30.662996  412953 type.go:168] "Request Body" body=""
	I1210 07:44:30.663089  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:30.663373  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:31.163117  412953 type.go:168] "Request Body" body=""
	I1210 07:44:31.163198  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:31.163590  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:31.662807  412953 type.go:168] "Request Body" body=""
	I1210 07:44:31.662884  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:31.663200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:31.663248  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:32.162759  412953 type.go:168] "Request Body" body=""
	I1210 07:44:32.162841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:32.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:32.353668  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:32.409993  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:32.413403  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:32.413432  412953 retry.go:31] will retry after 6.914628842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:32.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:44:32.662856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:32.663233  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:33.162956  412953 type.go:168] "Request Body" body=""
	I1210 07:44:33.163049  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:33.163356  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:33.662689  412953 type.go:168] "Request Body" body=""
	I1210 07:44:33.662755  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:33.663031  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:34.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:44:34.162848  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:34.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:34.163258  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:34.663111  412953 type.go:168] "Request Body" body=""
	I1210 07:44:34.663191  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:34.663487  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:35.163272  412953 type.go:168] "Request Body" body=""
	I1210 07:44:35.163341  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:35.163701  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:35.663625  412953 type.go:168] "Request Body" body=""
	I1210 07:44:35.663709  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:35.664060  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:36.162762  412953 type.go:168] "Request Body" body=""
	I1210 07:44:36.162838  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:36.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:36.663557  412953 type.go:168] "Request Body" body=""
	I1210 07:44:36.663625  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:36.663891  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:36.663931  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:36.941565  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:36.998306  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:37.009698  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:37.009736  412953 retry.go:31] will retry after 8.728706472s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:37.163096  412953 type.go:168] "Request Body" body=""
	I1210 07:44:37.163180  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:37.163526  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:37.663088  412953 type.go:168] "Request Body" body=""
	I1210 07:44:37.663168  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:37.663465  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:38.162753  412953 type.go:168] "Request Body" body=""
	I1210 07:44:38.162830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:38.163170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:38.662738  412953 type.go:168] "Request Body" body=""
	I1210 07:44:38.662816  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:38.663200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:39.162911  412953 type.go:168] "Request Body" body=""
	I1210 07:44:39.162982  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:39.163311  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:39.163365  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:39.328689  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:39.391413  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:39.391461  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:39.391479  412953 retry.go:31] will retry after 20.069023813s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
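[Editor's note] The "will retry after" intervals in the retry.go lines grow unevenly across attempts (2.65s, 7.73s, 20.07s, ...), the signature of jittered exponential backoff. A sketch of that pattern using k8s.io/apimachinery's wait package; the constants here are assumptions for illustration, not minikube's actual tuning:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        backoff := wait.Backoff{
            Duration: 2 * time.Second, // first delay, near the ~2.65s seen above
            Factor:   1.5,             // grow the delay each attempt
            Jitter:   0.5,             // randomize, giving the uneven gaps in the log
            Steps:    10,
        }
        attempt := 0
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            attempt++
            fmt.Println("attempt", attempt)
            // Return true once the apply succeeds; returning false retries
            // after the next (jittered, larger) delay.
            return false, nil
        })
        fmt.Println("gave up:", err)
    }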
	I1210 07:44:39.663623  412953 type.go:168] "Request Body" body=""
	I1210 07:44:39.663692  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:39.664007  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:40.163717  412953 type.go:168] "Request Body" body=""
	I1210 07:44:40.163789  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:40.164098  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:40.662854  412953 type.go:168] "Request Body" body=""
	I1210 07:44:40.662926  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:40.663261  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:41.163240  412953 type.go:168] "Request Body" body=""
	I1210 07:44:41.163310  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:41.163588  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:41.163638  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:41.663374  412953 type.go:168] "Request Body" body=""
	I1210 07:44:41.663448  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:41.663787  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:42.163614  412953 type.go:168] "Request Body" body=""
	I1210 07:44:42.163700  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:42.164110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:42.662817  412953 type.go:168] "Request Body" body=""
	I1210 07:44:42.662893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:42.663220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:43.162788  412953 type.go:168] "Request Body" body=""
	I1210 07:44:43.162930  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:43.163267  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:43.662791  412953 type.go:168] "Request Body" body=""
	I1210 07:44:43.662874  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:43.663242  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:43.663300  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:44.162963  412953 type.go:168] "Request Body" body=""
	I1210 07:44:44.163057  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:44.163345  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:44.663123  412953 type.go:168] "Request Body" body=""
	I1210 07:44:44.663199  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:44.663539  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:45.163160  412953 type.go:168] "Request Body" body=""
	I1210 07:44:45.163248  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:45.163618  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:45.663558  412953 type.go:168] "Request Body" body=""
	I1210 07:44:45.663640  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:45.663930  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:45.663983  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:45.739308  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:45.803966  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:45.804014  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:45.804032  412953 retry.go:31] will retry after 15.619557427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:46.163368  412953 type.go:168] "Request Body" body=""
	I1210 07:44:46.163449  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:46.163809  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:46.663723  412953 type.go:168] "Request Body" body=""
	I1210 07:44:46.663804  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:46.664157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:47.162830  412953 type.go:168] "Request Body" body=""
	I1210 07:44:47.162904  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:47.163246  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:47.662803  412953 type.go:168] "Request Body" body=""
	I1210 07:44:47.662878  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:47.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:48.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:44:48.162868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:48.163231  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:48.163295  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:48.662914  412953 type.go:168] "Request Body" body=""
	I1210 07:44:48.662989  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:48.663322  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:49.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:44:49.162856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:49.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:49.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:44:49.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:49.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:50.162736  412953 type.go:168] "Request Body" body=""
	I1210 07:44:50.162810  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:50.163100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:50.662994  412953 type.go:168] "Request Body" body=""
	I1210 07:44:50.663140  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:50.663480  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:50.663536  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:51.163315  412953 type.go:168] "Request Body" body=""
	I1210 07:44:51.163397  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:51.163738  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:51.663484  412953 type.go:168] "Request Body" body=""
	I1210 07:44:51.663554  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:51.663817  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:52.163592  412953 type.go:168] "Request Body" body=""
	I1210 07:44:52.163675  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:52.164024  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:52.663725  412953 type.go:168] "Request Body" body=""
	I1210 07:44:52.663805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:52.664170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:52.664269  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:53.162915  412953 type.go:168] "Request Body" body=""
	I1210 07:44:53.162989  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:53.163353  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:53.662767  412953 type.go:168] "Request Body" body=""
	I1210 07:44:53.662844  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:53.663173  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:54.162859  412953 type.go:168] "Request Body" body=""
	I1210 07:44:54.162935  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:54.163287  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:54.663094  412953 type.go:168] "Request Body" body=""
	I1210 07:44:54.663170  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:54.663454  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:55.163141  412953 type.go:168] "Request Body" body=""
	I1210 07:44:55.163215  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:55.163544  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:55.163602  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:55.663472  412953 type.go:168] "Request Body" body=""
	I1210 07:44:55.663544  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:55.663857  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:56.163640  412953 type.go:168] "Request Body" body=""
	I1210 07:44:56.163716  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:56.163996  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:56.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:44:56.662805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:56.663197  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:57.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:44:57.162851  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:57.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:57.662711  412953 type.go:168] "Request Body" body=""
	I1210 07:44:57.662787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:57.663158  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:57.663213  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:58.162868  412953 type.go:168] "Request Body" body=""
	I1210 07:44:58.162941  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:58.163282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:58.662753  412953 type.go:168] "Request Body" body=""
	I1210 07:44:58.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:58.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:59.162735  412953 type.go:168] "Request Body" body=""
	I1210 07:44:59.162805  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:59.163143  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:59.460756  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:59.515959  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:59.519313  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:59.519349  412953 retry.go:31] will retry after 28.214559207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:59.663650  412953 type.go:168] "Request Body" body=""
	I1210 07:44:59.663726  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:59.664046  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:59.664099  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
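[Editor's note] "connect: connection refused" in these warnings means nothing is listening on 192.168.49.2:8441 at all (the apiserver process is down or restarting), as opposed to a TLS or authentication failure, which would occur after the TCP connection succeeds. A minimal probe that reproduces the distinction (address copied from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
        if err != nil {
            // This is the state the log shows: the port refuses the dial.
            fmt.Println("apiserver port not accepting connections:", err)
            return
        }
        conn.Close()
        fmt.Println("port open; a failure here would be higher in the stack")
    }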
	I1210 07:45:00.162860  412953 type.go:168] "Request Body" body=""
	I1210 07:45:00.162952  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:00.163293  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:00.671201  412953 type.go:168] "Request Body" body=""
	I1210 07:45:00.671283  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:00.671619  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:01.163405  412953 type.go:168] "Request Body" body=""
	I1210 07:45:01.163498  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:01.163856  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:01.424291  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:01.504370  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:01.504408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:01.504426  412953 retry.go:31] will retry after 11.28420248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
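
Note that the error above is not a problem with the manifest itself: kubectl apply performs client-side validation by downloading the OpenAPI schema from the apiserver (the /openapi/v2 URL in the message), so while nothing is listening on :8441 the apply fails before the manifest is even submitted. The suggested --validate=false would only skip that schema download; submission would still fail against a dead socket. A reachability probe one could run before retrying, as a sketch only (waitForAPIServer is a hypothetical helper, not part of minikube):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer polls the endpoint kubectl's validation depends on
	// (the /openapi/v2 URL in the errors above) until it answers, instead
	// of letting each apply fail on a dead socket.
	func waitForAPIServer(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed cert; skip verification
			// for this reachability probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				return nil // any HTTP answer means the socket is up
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not reachable within %s", url, timeout)
	}

	func main() {
		if err := waitForAPIServer("https://localhost:8441/openapi/v2?timeout=32s", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
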
	[... same GET /api/v1/nodes/functional-314220 poll repeated every ~500ms from 07:45:01.662 to 07:45:12.663; node_ready warnings at 07:45:02, 07:45:04, 07:45:06, 07:45:08 and 07:45:10, each: Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused ...]
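
The condensed loop above is minikube waiting for the node's Ready condition: it GETs the node object roughly twice a second and, every few attempts, logs the transport error and keeps polling. The shape of that wait, as a sketch (getNodeReady and waitNodeReady are illustrative names, not the node_ready.go implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getNodeReady is a hypothetical stand-in for the GET
	// /api/v1/nodes/functional-314220 calls in the log; while the
	// apiserver is down it can only return a transport error.
	func getNodeReady(node string) (bool, error) {
		return false, errors.New(`Get "https://192.168.49.2:8441/api/v1/nodes/` + node + `": connect: connection refused`)
	}

	// waitNodeReady polls twice a second, logging failures and retrying,
	// which is the shape of the type.go/round_trippers/node_ready loop above.
	func waitNodeReady(node string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ready, err := getNodeReady(node)
			if err != nil {
				fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", node, err)
			} else if ready {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q never became Ready within %s", node, timeout)
	}

	func main() {
		_ = waitNodeReady("functional-314220", 3*time.Second)
	}
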
	I1210 07:45:12.789627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:12.850283  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:12.850328  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:12.850347  412953 retry.go:31] will retry after 28.725170788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll loop continues unchanged from 07:45:13.162 to 07:45:27.663, with the same connection-refused node_ready warnings roughly every two seconds ...]
	I1210 07:45:27.734263  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:45:27.790479  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:27.794248  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:27.794290  412953 retry.go:31] will retry after 44.751938518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll loop continues unchanged from 07:45:28.162 to 07:45:41.163, same connection-refused node_ready warnings roughly every two seconds ...]
	I1210 07:45:41.576476  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:41.640104  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640160  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640252  412953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
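
At this point the retry budget for this addon is exhausted and the machinery stops retrying: the failure is surfaced as a warning via out.go and startup continues with the addon unconfigured, wrapping every failed callback into the single "running callbacks: [...]" message above. A sketch of that collect-and-wrap shape, with hypothetical names (enableAddon), not the actual minikube addons code:

	package main

	import (
		"errors"
		"fmt"
	)

	// enableAddon runs each setup callback, collects every failure, and
	// wraps them into one error instead of stopping at the first one,
	// mirroring the "running callbacks: [...]" message in the log.
	func enableAddon(name string, callbacks []func() error) error {
		var errs []error
		for _, cb := range callbacks {
			if err := cb(); err != nil {
				errs = append(errs, err)
			}
		}
		if len(errs) > 0 {
			return fmt.Errorf("running callbacks: %w", errors.Join(errs...))
		}
		return nil
	}

	func main() {
		err := enableAddon("storage-provisioner", []func() error{
			func() error { return errors.New("kubectl apply: Process exited with status 1") },
		})
		if err != nil {
			fmt.Printf("! Enabling 'storage-provisioner' returned an error: %v\n", err)
		}
	}
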
	[... poll loop continues unchanged from 07:45:41.663 to 07:45:53.663, same connection-refused node_ready warnings roughly every two seconds ...]
	I1210 07:45:54.163395  412953 type.go:168] "Request Body" body=""
	I1210 07:45:54.163472  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:54.163802  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:54.163854  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:54.663613  412953 type.go:168] "Request Body" body=""
	I1210 07:45:54.663694  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:54.664023  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:55.163717  412953 type.go:168] "Request Body" body=""
	I1210 07:45:55.163799  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:55.164118  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:55.662998  412953 type.go:168] "Request Body" body=""
	I1210 07:45:55.663091  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:55.663450  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:56.163121  412953 type.go:168] "Request Body" body=""
	I1210 07:45:56.163190  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:56.163520  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:56.663288  412953 type.go:168] "Request Body" body=""
	I1210 07:45:56.663383  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:56.663710  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:56.663763  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:57.163429  412953 type.go:168] "Request Body" body=""
	I1210 07:45:57.163515  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:57.163853  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:57.663634  412953 type.go:168] "Request Body" body=""
	I1210 07:45:57.663715  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:57.664039  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:58.162743  412953 type.go:168] "Request Body" body=""
	I1210 07:45:58.162822  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:58.163189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:58.662924  412953 type.go:168] "Request Body" body=""
	I1210 07:45:58.663036  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:58.663358  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:59.162715  412953 type.go:168] "Request Body" body=""
	I1210 07:45:59.162781  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:59.163074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:59.163122  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:59.662789  412953 type.go:168] "Request Body" body=""
	I1210 07:45:59.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:59.663251  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:00.162998  412953 type.go:168] "Request Body" body=""
	I1210 07:46:00.163123  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:00.163461  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:00.663540  412953 type.go:168] "Request Body" body=""
	I1210 07:46:00.663611  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:00.663865  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:01.163683  412953 type.go:168] "Request Body" body=""
	I1210 07:46:01.163757  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:01.164109  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:01.164167  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:01.662851  412953 type.go:168] "Request Body" body=""
	I1210 07:46:01.662929  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:01.663282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:02.162953  412953 type.go:168] "Request Body" body=""
	I1210 07:46:02.163054  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:02.163311  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:02.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:46:02.662862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:02.663241  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:03.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:46:03.162854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:03.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:03.662701  412953 type.go:168] "Request Body" body=""
	I1210 07:46:03.662776  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:03.663100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:03.663150  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:04.162777  412953 type.go:168] "Request Body" body=""
	I1210 07:46:04.162853  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:04.163234  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:04.663099  412953 type.go:168] "Request Body" body=""
	I1210 07:46:04.663184  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:04.663539  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:05.163328  412953 type.go:168] "Request Body" body=""
	I1210 07:46:05.163396  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:05.163668  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:05.663706  412953 type.go:168] "Request Body" body=""
	I1210 07:46:05.663785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:05.664109  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:05.664167  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:06.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:46:06.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:06.163213  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:06.662683  412953 type.go:168] "Request Body" body=""
	I1210 07:46:06.662757  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:06.663110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:07.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:46:07.162915  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:07.163278  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:07.663001  412953 type.go:168] "Request Body" body=""
	I1210 07:46:07.663101  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:07.663452  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:08.163105  412953 type.go:168] "Request Body" body=""
	I1210 07:46:08.163173  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:08.163505  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:08.163551  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:08.663246  412953 type.go:168] "Request Body" body=""
	I1210 07:46:08.663355  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:08.663696  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:09.163360  412953 type.go:168] "Request Body" body=""
	I1210 07:46:09.163436  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:09.163764  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:09.663545  412953 type.go:168] "Request Body" body=""
	I1210 07:46:09.663613  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:09.663865  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:10.163582  412953 type.go:168] "Request Body" body=""
	I1210 07:46:10.163660  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:10.164166  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:10.164222  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:10.662978  412953 type.go:168] "Request Body" body=""
	I1210 07:46:10.663066  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:10.663429  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:11.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:46:11.162791  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:11.163072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:11.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:46:11.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:11.663211  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:12.162917  412953 type.go:168] "Request Body" body=""
	I1210 07:46:12.162995  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:12.163357  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:12.546948  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:46:12.609717  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609753  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609836  412953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:46:12.614848  412953 out.go:179] * Enabled addons: 
	I1210 07:46:12.617540  412953 addons.go:530] duration metric: took 1m57.960111858s for enable addons: enabled=[]
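
The storageclass apply fails for the same underlying reason: kubectl cannot reach the apiserver on localhost:8441 to download the OpenAPI schema it uses for client-side validation, so the addon enable gives up with enabled=[]. The log shows minikube's addon machinery treats this as "apply failed, will retry" (addons.go:477); a hedged sketch of that retry-around-exec'd-kubectl pattern follows (applyWithRetry, the attempt count, and the 2 s backoff are illustrative, not minikube's actual policy; the command line mirrors the one logged above):

// Sketch of an apply-with-retry wrapper like the "apply failed, will retry"
// behaviour logged by addons.go. Paths and retry policy are illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// sudo accepts leading VAR=value assignments, matching the logged command.
		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed (attempt %d): %v\n%s", i+1, err, out)
		fmt.Println(lastErr, "- will retry")
		time.Sleep(2 * time.Second)
	}
	return lastErr
}

The kubectl error text itself suggests --validate=false as an escape hatch, but validation only fails here because the apiserver is unreachable, so retrying once the apiserver is back (as the sketch does) is the more faithful fix.
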
	I1210 07:46:12.662919  412953 type.go:168] "Request Body" body=""
	I1210 07:46:12.663005  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:12.663288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:12.663340  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same /api/v1/nodes/functional-314220 poll continues every ~500 ms from 07:46:13 through 07:46:47, each attempt returning "dial tcp 192.168.49.2:8441: connect: connection refused", with the periodic node_ready.go:55 "will retry" warning throughout ...]
	I1210 07:46:48.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:46:48.162860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:48.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:48.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:46:48.662793  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:48.663100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:48.663152  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:49.162765  412953 type.go:168] "Request Body" body=""
	I1210 07:46:49.162862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:49.163255  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:49.662773  412953 type.go:168] "Request Body" body=""
	I1210 07:46:49.662850  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:49.663184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:50.162728  412953 type.go:168] "Request Body" body=""
	I1210 07:46:50.162800  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:50.163110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:50.662840  412953 type.go:168] "Request Body" body=""
	I1210 07:46:50.662940  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:50.663293  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:50.663354  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:51.163080  412953 type.go:168] "Request Body" body=""
	I1210 07:46:51.163164  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:51.163485  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:51.663128  412953 type.go:168] "Request Body" body=""
	I1210 07:46:51.663202  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:51.663559  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:52.163395  412953 type.go:168] "Request Body" body=""
	I1210 07:46:52.163475  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:52.163785  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:52.663590  412953 type.go:168] "Request Body" body=""
	I1210 07:46:52.663677  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:52.664017  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:52.664072  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:53.162668  412953 type.go:168] "Request Body" body=""
	I1210 07:46:53.162748  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:53.163064  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:53.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:46:53.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:53.663165  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:54.162852  412953 type.go:168] "Request Body" body=""
	I1210 07:46:54.162929  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:54.163260  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:54.663174  412953 type.go:168] "Request Body" body=""
	I1210 07:46:54.663244  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:54.663519  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:55.163370  412953 type.go:168] "Request Body" body=""
	I1210 07:46:55.163453  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:55.163790  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:55.163840  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:55.663591  412953 type.go:168] "Request Body" body=""
	I1210 07:46:55.663675  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:55.664032  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:56.162718  412953 type.go:168] "Request Body" body=""
	I1210 07:46:56.162787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:56.163127  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:56.662776  412953 type.go:168] "Request Body" body=""
	I1210 07:46:56.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:56.663223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:57.162920  412953 type.go:168] "Request Body" body=""
	I1210 07:46:57.162997  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:57.163350  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:57.663044  412953 type.go:168] "Request Body" body=""
	I1210 07:46:57.663119  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:57.663437  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:57.663491  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:58.162764  412953 type.go:168] "Request Body" body=""
	I1210 07:46:58.162842  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:58.163207  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:58.662916  412953 type.go:168] "Request Body" body=""
	I1210 07:46:58.662998  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:58.663346  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:59.162707  412953 type.go:168] "Request Body" body=""
	I1210 07:46:59.162786  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:59.163086  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:59.662772  412953 type.go:168] "Request Body" body=""
	I1210 07:46:59.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:59.663202  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:00.162842  412953 type.go:168] "Request Body" body=""
	I1210 07:47:00.162937  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:00.163453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:00.163526  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:00.663007  412953 type.go:168] "Request Body" body=""
	I1210 07:47:00.663098  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:00.663417  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:01.162776  412953 type.go:168] "Request Body" body=""
	I1210 07:47:01.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:01.163244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:01.662972  412953 type.go:168] "Request Body" body=""
	I1210 07:47:01.663073  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:01.663435  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:02.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:47:02.162838  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:02.163124  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:02.662798  412953 type.go:168] "Request Body" body=""
	I1210 07:47:02.662880  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:02.663234  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:02.663292  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:03.162956  412953 type.go:168] "Request Body" body=""
	I1210 07:47:03.163056  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:03.163409  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:03.663107  412953 type.go:168] "Request Body" body=""
	I1210 07:47:03.663180  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:03.663591  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:04.163359  412953 type.go:168] "Request Body" body=""
	I1210 07:47:04.163439  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:04.163771  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:04.662805  412953 type.go:168] "Request Body" body=""
	I1210 07:47:04.662883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:04.663237  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:05.162670  412953 type.go:168] "Request Body" body=""
	I1210 07:47:05.162740  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:05.163054  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:05.163104  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:05.662994  412953 type.go:168] "Request Body" body=""
	I1210 07:47:05.663093  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:05.663427  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:06.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:06.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:06.163239  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:06.662861  412953 type.go:168] "Request Body" body=""
	I1210 07:47:06.662927  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:06.663190  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:07.162855  412953 type.go:168] "Request Body" body=""
	I1210 07:47:07.162985  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:07.163313  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:07.163372  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:07.662758  412953 type.go:168] "Request Body" body=""
	I1210 07:47:07.662832  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:07.663175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:08.162737  412953 type.go:168] "Request Body" body=""
	I1210 07:47:08.162808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:08.163134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:08.662744  412953 type.go:168] "Request Body" body=""
	I1210 07:47:08.662821  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:08.663156  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:09.162866  412953 type.go:168] "Request Body" body=""
	I1210 07:47:09.162942  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:09.163277  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:09.662758  412953 type.go:168] "Request Body" body=""
	I1210 07:47:09.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:09.663155  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:09.663202  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:10.162922  412953 type.go:168] "Request Body" body=""
	I1210 07:47:10.163005  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:10.163368  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:10.662972  412953 type.go:168] "Request Body" body=""
	I1210 07:47:10.663067  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:10.663391  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:11.163075  412953 type.go:168] "Request Body" body=""
	I1210 07:47:11.163169  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:11.163529  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:11.663298  412953 type.go:168] "Request Body" body=""
	I1210 07:47:11.663381  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:11.663711  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:11.663770  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:12.163545  412953 type.go:168] "Request Body" body=""
	I1210 07:47:12.163626  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:12.163962  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:12.663589  412953 type.go:168] "Request Body" body=""
	I1210 07:47:12.663663  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:12.663928  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:13.163691  412953 type.go:168] "Request Body" body=""
	I1210 07:47:13.163763  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:13.164095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:13.662728  412953 type.go:168] "Request Body" body=""
	I1210 07:47:13.662802  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:13.663152  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:14.162755  412953 type.go:168] "Request Body" body=""
	I1210 07:47:14.162827  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:14.163128  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:14.163174  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:14.663040  412953 type.go:168] "Request Body" body=""
	I1210 07:47:14.663122  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:14.663453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:15.163172  412953 type.go:168] "Request Body" body=""
	I1210 07:47:15.163245  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:15.163583  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:15.663556  412953 type.go:168] "Request Body" body=""
	I1210 07:47:15.663626  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:15.663897  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:16.162681  412953 type.go:168] "Request Body" body=""
	I1210 07:47:16.162760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:16.163086  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:16.662812  412953 type.go:168] "Request Body" body=""
	I1210 07:47:16.662888  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:16.663240  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:16.663299  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:17.162710  412953 type.go:168] "Request Body" body=""
	I1210 07:47:17.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:17.163065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:17.662764  412953 type.go:168] "Request Body" body=""
	I1210 07:47:17.662843  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:17.663184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:18.162904  412953 type.go:168] "Request Body" body=""
	I1210 07:47:18.162985  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:18.163341  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:18.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:47:18.662779  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:18.663072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:19.162752  412953 type.go:168] "Request Body" body=""
	I1210 07:47:19.162825  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:19.163160  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:19.163217  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:19.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:47:19.662831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:19.663169  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:20.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:47:20.162785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:20.163102  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:20.662785  412953 type.go:168] "Request Body" body=""
	I1210 07:47:20.662863  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:20.663210  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:21.162946  412953 type.go:168] "Request Body" body=""
	I1210 07:47:21.163041  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:21.163356  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:21.163416  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:21.662705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:21.662777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:21.663055  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:22.162766  412953 type.go:168] "Request Body" body=""
	I1210 07:47:22.162839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:22.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:22.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:22.662854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:22.663212  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:23.163429  412953 type.go:168] "Request Body" body=""
	I1210 07:47:23.163503  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:23.163746  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:23.163785  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:23.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:47:23.663593  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:23.663930  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:24.163583  412953 type.go:168] "Request Body" body=""
	I1210 07:47:24.163661  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:24.164012  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:24.662895  412953 type.go:168] "Request Body" body=""
	I1210 07:47:24.662963  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:24.663238  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:25.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:25.162856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:25.163189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:25.662776  412953 type.go:168] "Request Body" body=""
	I1210 07:47:25.662854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:25.663185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:25.663235  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:26.162705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:26.162780  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:26.163095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:26.662818  412953 type.go:168] "Request Body" body=""
	I1210 07:47:26.662889  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:26.663264  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:27.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:47:27.162913  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:27.163219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:27.662705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:27.662774  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:27.663040  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:28.162741  412953 type.go:168] "Request Body" body=""
	I1210 07:47:28.162822  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:28.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:28.163235  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:28.662765  412953 type.go:168] "Request Body" body=""
	I1210 07:47:28.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:28.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:29.163595  412953 type.go:168] "Request Body" body=""
	I1210 07:47:29.163681  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:29.163938  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:29.663716  412953 type.go:168] "Request Body" body=""
	I1210 07:47:29.663791  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:29.664120  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:30.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:47:30.162881  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:30.163228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:30.163277  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:30.662975  412953 type.go:168] "Request Body" body=""
	I1210 07:47:30.663060  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:30.663309  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:31.162750  412953 type.go:168] "Request Body" body=""
	I1210 07:47:31.162823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:31.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:31.662903  412953 type.go:168] "Request Body" body=""
	I1210 07:47:31.662983  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:31.663348  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:32.163059  412953 type.go:168] "Request Body" body=""
	I1210 07:47:32.163151  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:32.163486  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:32.163538  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:32.663271  412953 type.go:168] "Request Body" body=""
	I1210 07:47:32.663354  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:32.663709  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:33.163521  412953 type.go:168] "Request Body" body=""
	I1210 07:47:33.163606  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:33.163923  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:33.663552  412953 type.go:168] "Request Body" body=""
	I1210 07:47:33.663621  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:33.663890  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:34.162672  412953 type.go:168] "Request Body" body=""
	I1210 07:47:34.162756  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:34.163144  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:34.662973  412953 type.go:168] "Request Body" body=""
	I1210 07:47:34.663067  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:34.663388  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:34.663446  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:35.163091  412953 type.go:168] "Request Body" body=""
	I1210 07:47:35.163163  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:35.163426  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:35.663452  412953 type.go:168] "Request Body" body=""
	I1210 07:47:35.663527  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:35.663843  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:36.163648  412953 type.go:168] "Request Body" body=""
	I1210 07:47:36.163726  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:36.164083  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:36.662824  412953 type.go:168] "Request Body" body=""
	I1210 07:47:36.662911  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:36.663202  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:37.162888  412953 type.go:168] "Request Body" body=""
	I1210 07:47:37.162961  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:37.163292  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:37.163351  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET/empty-response cycle above repeats at ~500ms intervals from I1210 07:47:37.663060 through I1210 07:48:38.663268, every request to https://192.168.49.2:8441/api/v1/nodes/functional-314220 going unanswered; node_ready.go:55 emits the same "connection refused" warning every 2-2.5s, i.e. roughly every fourth or fifth attempt ...]
	I1210 07:48:39.162981  412953 type.go:168] "Request Body" body=""
	I1210 07:48:39.163079  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:39.163418  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:39.662892  412953 type.go:168] "Request Body" body=""
	I1210 07:48:39.662959  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:39.663228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:40.162925  412953 type.go:168] "Request Body" body=""
	I1210 07:48:40.163032  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:40.163374  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:40.163433  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:40.663119  412953 type.go:168] "Request Body" body=""
	I1210 07:48:40.663199  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:40.663541  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:41.163268  412953 type.go:168] "Request Body" body=""
	I1210 07:48:41.163348  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:41.163613  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:41.663378  412953 type.go:168] "Request Body" body=""
	I1210 07:48:41.663451  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:41.663768  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:42.163629  412953 type.go:168] "Request Body" body=""
	I1210 07:48:42.163723  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:42.164102  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:42.164166  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:42.662641  412953 type.go:168] "Request Body" body=""
	I1210 07:48:42.662719  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:42.663050  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:43.162792  412953 type.go:168] "Request Body" body=""
	I1210 07:48:43.162873  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:43.163217  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:43.662974  412953 type.go:168] "Request Body" body=""
	I1210 07:48:43.663077  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:43.663400  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:44.162713  412953 type.go:168] "Request Body" body=""
	I1210 07:48:44.162788  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:44.163107  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:44.663097  412953 type.go:168] "Request Body" body=""
	I1210 07:48:44.663173  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:44.663538  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:44.663598  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:45.163413  412953 type.go:168] "Request Body" body=""
	I1210 07:48:45.163521  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:45.163972  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:45.662711  412953 type.go:168] "Request Body" body=""
	I1210 07:48:45.662787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:45.663074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:46.162828  412953 type.go:168] "Request Body" body=""
	I1210 07:48:46.162907  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:46.163281  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:46.663004  412953 type.go:168] "Request Body" body=""
	I1210 07:48:46.663101  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:46.663416  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:47.162720  412953 type.go:168] "Request Body" body=""
	I1210 07:48:47.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:47.163096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:47.163144  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:47.662695  412953 type.go:168] "Request Body" body=""
	I1210 07:48:47.662772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:47.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:48.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:48.162861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:48.163200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:48.662706  412953 type.go:168] "Request Body" body=""
	I1210 07:48:48.662780  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:48.663092  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:49.162753  412953 type.go:168] "Request Body" body=""
	I1210 07:48:49.162836  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:49.163183  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:49.163230  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:49.662751  412953 type.go:168] "Request Body" body=""
	I1210 07:48:49.662831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:49.663185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:50.162871  412953 type.go:168] "Request Body" body=""
	I1210 07:48:50.162945  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:50.163286  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:50.663056  412953 type.go:168] "Request Body" body=""
	I1210 07:48:50.663136  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:50.663481  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:51.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:51.162872  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:51.163240  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:51.163304  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:51.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:48:51.662828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:51.663133  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:52.162821  412953 type.go:168] "Request Body" body=""
	I1210 07:48:52.162901  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:52.163251  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:52.662959  412953 type.go:168] "Request Body" body=""
	I1210 07:48:52.663046  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:52.663381  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:53.163082  412953 type.go:168] "Request Body" body=""
	I1210 07:48:53.163172  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:53.163460  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:53.163528  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:53.663234  412953 type.go:168] "Request Body" body=""
	I1210 07:48:53.663307  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:53.663639  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:54.163275  412953 type.go:168] "Request Body" body=""
	I1210 07:48:54.163349  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:54.163662  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:54.663667  412953 type.go:168] "Request Body" body=""
	I1210 07:48:54.663741  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:54.663998  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:55.162712  412953 type.go:168] "Request Body" body=""
	I1210 07:48:55.162787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:55.163138  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:55.663042  412953 type.go:168] "Request Body" body=""
	I1210 07:48:55.663118  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:55.663430  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:55.663490  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:56.163120  412953 type.go:168] "Request Body" body=""
	I1210 07:48:56.163192  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:56.163444  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:56.663167  412953 type.go:168] "Request Body" body=""
	I1210 07:48:56.663247  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:56.663592  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:57.163414  412953 type.go:168] "Request Body" body=""
	I1210 07:48:57.163491  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:57.163814  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:57.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:48:57.663593  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:57.663859  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:57.663899  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:58.163700  412953 type.go:168] "Request Body" body=""
	I1210 07:48:58.163775  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:58.164099  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:58.662798  412953 type.go:168] "Request Body" body=""
	I1210 07:48:58.662883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:58.663263  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:59.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:48:59.162788  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:59.163088  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:59.662782  412953 type.go:168] "Request Body" body=""
	I1210 07:48:59.662866  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:59.663288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:00.162837  412953 type.go:168] "Request Body" body=""
	I1210 07:49:00.162924  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:00.163302  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:00.163360  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:00.663222  412953 type.go:168] "Request Body" body=""
	I1210 07:49:00.663294  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:00.663553  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:01.163418  412953 type.go:168] "Request Body" body=""
	I1210 07:49:01.163505  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:01.163868  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:01.663688  412953 type.go:168] "Request Body" body=""
	I1210 07:49:01.663772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:01.664129  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:02.162731  412953 type.go:168] "Request Body" body=""
	I1210 07:49:02.162806  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:02.163127  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:02.662770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:02.662851  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:02.663217  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:02.663293  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:03.162959  412953 type.go:168] "Request Body" body=""
	I1210 07:49:03.163058  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:03.163386  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:03.663598  412953 type.go:168] "Request Body" body=""
	I1210 07:49:03.663674  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:03.663952  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:04.163712  412953 type.go:168] "Request Body" body=""
	I1210 07:49:04.163781  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:04.164105  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:04.663090  412953 type.go:168] "Request Body" body=""
	I1210 07:49:04.663166  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:04.663491  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:04.663550  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:05.163267  412953 type.go:168] "Request Body" body=""
	I1210 07:49:05.163335  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:05.163606  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:05.663590  412953 type.go:168] "Request Body" body=""
	I1210 07:49:05.663661  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:05.663988  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:06.162730  412953 type.go:168] "Request Body" body=""
	I1210 07:49:06.162809  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:06.163147  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:06.662702  412953 type.go:168] "Request Body" body=""
	I1210 07:49:06.662772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:06.663090  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:07.162769  412953 type.go:168] "Request Body" body=""
	I1210 07:49:07.162842  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:07.163224  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:07.163284  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:07.662818  412953 type.go:168] "Request Body" body=""
	I1210 07:49:07.662894  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:07.663261  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:08.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:49:08.162860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:08.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:08.662740  412953 type.go:168] "Request Body" body=""
	I1210 07:49:08.662817  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:08.663150  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:09.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:09.162845  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:09.163192  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:09.662749  412953 type.go:168] "Request Body" body=""
	I1210 07:49:09.662818  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:09.663142  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:09.663197  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:10.162737  412953 type.go:168] "Request Body" body=""
	I1210 07:49:10.162814  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:10.163183  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:10.662970  412953 type.go:168] "Request Body" body=""
	I1210 07:49:10.663057  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:10.663346  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:11.162920  412953 type.go:168] "Request Body" body=""
	I1210 07:49:11.162997  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:11.163332  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:11.662882  412953 type.go:168] "Request Body" body=""
	I1210 07:49:11.662959  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:11.663292  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:11.663351  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:12.163033  412953 type.go:168] "Request Body" body=""
	I1210 07:49:12.163107  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:12.163439  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:12.663269  412953 type.go:168] "Request Body" body=""
	I1210 07:49:12.663343  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:12.663599  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:13.163380  412953 type.go:168] "Request Body" body=""
	I1210 07:49:13.163458  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:13.163773  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:13.663450  412953 type.go:168] "Request Body" body=""
	I1210 07:49:13.663530  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:13.663864  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:13.663928  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:14.163662  412953 type.go:168] "Request Body" body=""
	I1210 07:49:14.163735  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:14.163995  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:14.663028  412953 type.go:168] "Request Body" body=""
	I1210 07:49:14.663102  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:14.663407  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:15.162751  412953 type.go:168] "Request Body" body=""
	I1210 07:49:15.162828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:15.163189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:15.662925  412953 type.go:168] "Request Body" body=""
	I1210 07:49:15.662994  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:15.663281  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:16.162756  412953 type.go:168] "Request Body" body=""
	I1210 07:49:16.162841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:16.163429  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:16.163485  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:16.663183  412953 type.go:168] "Request Body" body=""
	I1210 07:49:16.663269  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:16.663642  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:17.163441  412953 type.go:168] "Request Body" body=""
	I1210 07:49:17.163510  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:17.163771  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:17.663499  412953 type.go:168] "Request Body" body=""
	I1210 07:49:17.663586  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:17.663939  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:18.163578  412953 type.go:168] "Request Body" body=""
	I1210 07:49:18.163666  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:18.164004  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:18.164061  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:18.662710  412953 type.go:168] "Request Body" body=""
	I1210 07:49:18.662778  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:18.663116  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:19.162846  412953 type.go:168] "Request Body" body=""
	I1210 07:49:19.162918  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:19.163279  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:19.662981  412953 type.go:168] "Request Body" body=""
	I1210 07:49:19.663079  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:19.663449  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:20.163115  412953 type.go:168] "Request Body" body=""
	I1210 07:49:20.163185  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:20.163485  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:20.662991  412953 type.go:168] "Request Body" body=""
	I1210 07:49:20.663087  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:20.663401  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:20.663459  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:21.163107  412953 type.go:168] "Request Body" body=""
	I1210 07:49:21.163187  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:21.163503  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:21.663312  412953 type.go:168] "Request Body" body=""
	I1210 07:49:21.663390  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:21.663648  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:22.163403  412953 type.go:168] "Request Body" body=""
	I1210 07:49:22.163478  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:22.163806  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:22.663492  412953 type.go:168] "Request Body" body=""
	I1210 07:49:22.663572  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:22.663904  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:22.663956  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:23.163518  412953 type.go:168] "Request Body" body=""
	I1210 07:49:23.163585  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:23.163836  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:23.663583  412953 type.go:168] "Request Body" body=""
	I1210 07:49:23.663658  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:23.663963  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:24.162717  412953 type.go:168] "Request Body" body=""
	I1210 07:49:24.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:24.163164  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:24.663159  412953 type.go:168] "Request Body" body=""
	I1210 07:49:24.663235  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:24.663577  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:25.163338  412953 type.go:168] "Request Body" body=""
	I1210 07:49:25.163416  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:25.163741  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:25.163799  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:25.663562  412953 type.go:168] "Request Body" body=""
	I1210 07:49:25.663632  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:25.663965  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:26.162670  412953 type.go:168] "Request Body" body=""
	I1210 07:49:26.162742  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:26.162992  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:26.662713  412953 type.go:168] "Request Body" body=""
	I1210 07:49:26.662787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:26.663115  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:27.162844  412953 type.go:168] "Request Body" body=""
	I1210 07:49:27.162918  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:27.163264  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:27.662706  412953 type.go:168] "Request Body" body=""
	I1210 07:49:27.662777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:27.663066  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:27.663116  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:28.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:28.162842  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:28.163197  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:28.662785  412953 type.go:168] "Request Body" body=""
	I1210 07:49:28.662870  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:28.663195  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:29.162663  412953 type.go:168] "Request Body" body=""
	I1210 07:49:29.162733  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:29.163065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:29.662764  412953 type.go:168] "Request Body" body=""
	I1210 07:49:29.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:29.663210  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:29.663268  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:30.162965  412953 type.go:168] "Request Body" body=""
	I1210 07:49:30.163064  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:30.163387  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:30.663348  412953 type.go:168] "Request Body" body=""
	I1210 07:49:30.663415  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:30.663681  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:31.163525  412953 type.go:168] "Request Body" body=""
	I1210 07:49:31.163602  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:31.163937  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:31.662666  412953 type.go:168] "Request Body" body=""
	I1210 07:49:31.662743  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:31.663114  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:32.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:49:32.162874  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:32.163205  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:32.163272  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:32.662932  412953 type.go:168] "Request Body" body=""
	I1210 07:49:32.663029  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:32.663371  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:33.162797  412953 type.go:168] "Request Body" body=""
	I1210 07:49:33.162872  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:33.163215  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:33.662690  412953 type.go:168] "Request Body" body=""
	I1210 07:49:33.662765  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:33.663040  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:34.162731  412953 type.go:168] "Request Body" body=""
	I1210 07:49:34.162811  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:34.163174  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:34.663120  412953 type.go:168] "Request Body" body=""
	I1210 07:49:34.663195  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:34.663560  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:34.663615  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:35.163345  412953 type.go:168] "Request Body" body=""
	I1210 07:49:35.163419  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:35.163698  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:35.663698  412953 type.go:168] "Request Body" body=""
	I1210 07:49:35.663775  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:35.664098  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:36.162816  412953 type.go:168] "Request Body" body=""
	I1210 07:49:36.162893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:36.163232  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:36.662691  412953 type.go:168] "Request Body" body=""
	I1210 07:49:36.662766  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:36.663041  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:37.162771  412953 type.go:168] "Request Body" body=""
	I1210 07:49:37.162853  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:37.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:37.163246  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:37.662782  412953 type.go:168] "Request Body" body=""
	I1210 07:49:37.662863  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:37.663181  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:38.162784  412953 type.go:168] "Request Body" body=""
	I1210 07:49:38.162866  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:38.163339  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:38.663026  412953 type.go:168] "Request Body" body=""
	I1210 07:49:38.663107  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:38.663399  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:39.162747  412953 type.go:168] "Request Body" body=""
	I1210 07:49:39.162828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:39.163215  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:39.163277  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:39.662785  412953 type.go:168] "Request Body" body=""
	I1210 07:49:39.662860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:39.663137  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:40.162787  412953 type.go:168] "Request Body" body=""
	I1210 07:49:40.162874  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:40.163221  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:40.663148  412953 type.go:168] "Request Body" body=""
	I1210 07:49:40.663236  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:40.663586  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:41.163335  412953 type.go:168] "Request Body" body=""
	I1210 07:49:41.163407  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:41.163738  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:41.163786  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:41.663533  412953 type.go:168] "Request Body" body=""
	I1210 07:49:41.663607  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:41.663906  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:42.163638  412953 type.go:168] "Request Body" body=""
	I1210 07:49:42.163723  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:42.164115  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:42.663519  412953 type.go:168] "Request Body" body=""
	I1210 07:49:42.663591  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:42.663894  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:43.163685  412953 type.go:168] "Request Body" body=""
	I1210 07:49:43.163760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:43.164112  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:43.164166  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:43.662835  412953 type.go:168] "Request Body" body=""
	I1210 07:49:43.662918  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:43.663257  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:44.162716  412953 type.go:168] "Request Body" body=""
	I1210 07:49:44.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:44.163078  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:44.663092  412953 type.go:168] "Request Body" body=""
	I1210 07:49:44.663175  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:44.663483  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:45.163326  412953 type.go:168] "Request Body" body=""
	I1210 07:49:45.163419  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:45.163779  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:45.662654  412953 type.go:168] "Request Body" body=""
	I1210 07:49:45.662729  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:45.663065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:45.663117  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:46.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:46.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:46.163223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:46.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:49:46.662868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:46.663236  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:47.162751  412953 type.go:168] "Request Body" body=""
	I1210 07:49:47.162831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:47.163170  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:47.662766  412953 type.go:168] "Request Body" body=""
	I1210 07:49:47.662844  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:47.663198  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:47.663257  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:48.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:49:48.162890  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:48.163263  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:48.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:49:48.662776  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:48.663061  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:49.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:49:49.162894  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:49.163340  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:49.662794  412953 type.go:168] "Request Body" body=""
	I1210 07:49:49.662875  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:49.663233  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:49.663295  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:50.163641  412953 type.go:168] "Request Body" body=""
	I1210 07:49:50.163727  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:50.163987  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:50.663073  412953 type.go:168] "Request Body" body=""
	I1210 07:49:50.663155  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:50.663511  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:51.163122  412953 type.go:168] "Request Body" body=""
	I1210 07:49:51.163206  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:51.163540  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:51.663260  412953 type.go:168] "Request Body" body=""
	I1210 07:49:51.663334  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:51.663585  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:51.663641  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:52.163471  412953 type.go:168] "Request Body" body=""
	I1210 07:49:52.163547  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:52.163896  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:52.663720  412953 type.go:168] "Request Body" body=""
	I1210 07:49:52.663796  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:52.664128  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:53.162655  412953 type.go:168] "Request Body" body=""
	I1210 07:49:53.162729  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:53.162984  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:53.662712  412953 type.go:168] "Request Body" body=""
	I1210 07:49:53.662785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:53.663162  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:54.162732  412953 type.go:168] "Request Body" body=""
	I1210 07:49:54.162812  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:54.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:54.163230  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:54.663007  412953 type.go:168] "Request Body" body=""
	I1210 07:49:54.663092  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:54.663348  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:55.162765  412953 type.go:168] "Request Body" body=""
	I1210 07:49:55.162839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:55.163203  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:55.662783  412953 type.go:168] "Request Body" body=""
	I1210 07:49:55.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:55.663208  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:56.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:49:56.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:56.163134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:56.662828  412953 type.go:168] "Request Body" body=""
	I1210 07:49:56.662903  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:56.663245  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:56.663302  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:57.162993  412953 type.go:168] "Request Body" body=""
	I1210 07:49:57.163092  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:57.163459  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:57.663133  412953 type.go:168] "Request Body" body=""
	I1210 07:49:57.663213  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:57.663561  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:58.163311  412953 type.go:168] "Request Body" body=""
	I1210 07:49:58.163387  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:58.163735  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:58.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:49:58.663591  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:58.663925  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:58.663979  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:59.163560  412953 type.go:168] "Request Body" body=""
	I1210 07:49:59.163638  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:59.163958  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:59.662670  412953 type.go:168] "Request Body" body=""
	I1210 07:49:59.662749  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:59.663105  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:00.162806  412953 type.go:168] "Request Body" body=""
	I1210 07:50:00.162893  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:00.163244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:00.663460  412953 type.go:168] "Request Body" body=""
	I1210 07:50:00.663540  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:00.664068  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:00.664196  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:01.162803  412953 type.go:168] "Request Body" body=""
	I1210 07:50:01.162896  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:01.163273  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:01.662786  412953 type.go:168] "Request Body" body=""
	I1210 07:50:01.662862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:01.663242  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:02.163061  412953 type.go:168] "Request Body" body=""
	I1210 07:50:02.163133  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:02.163407  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:02.662773  412953 type.go:168] "Request Body" body=""
	I1210 07:50:02.662847  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:02.663177  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:03.162915  412953 type.go:168] "Request Body" body=""
	I1210 07:50:03.162990  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:03.163330  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:03.163387  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:03.662715  412953 type.go:168] "Request Body" body=""
	I1210 07:50:03.662782  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:03.663054  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:04.162790  412953 type.go:168] "Request Body" body=""
	I1210 07:50:04.162866  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:04.163214  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:04.663178  412953 type.go:168] "Request Body" body=""
	I1210 07:50:04.663273  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:04.663624  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:05.163424  412953 type.go:168] "Request Body" body=""
	I1210 07:50:05.163513  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:05.163807  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:05.163852  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:05.662744  412953 type.go:168] "Request Body" body=""
	I1210 07:50:05.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:05.663225  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:06.162951  412953 type.go:168] "Request Body" body=""
	I1210 07:50:06.163056  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:06.163423  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:06.662712  412953 type.go:168] "Request Body" body=""
	I1210 07:50:06.662782  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:06.663083  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:07.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:50:07.162850  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:07.163201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:07.662893  412953 type.go:168] "Request Body" body=""
	I1210 07:50:07.662969  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:07.663309  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:07.663366  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:08.162691  412953 type.go:168] "Request Body" body=""
	I1210 07:50:08.162763  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:08.163063  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:08.662792  412953 type.go:168] "Request Body" body=""
	I1210 07:50:08.662868  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:08.663218  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:09.162919  412953 type.go:168] "Request Body" body=""
	I1210 07:50:09.163000  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:09.163347  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:09.662690  412953 type.go:168] "Request Body" body=""
	I1210 07:50:09.662770  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:09.663051  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:10.162778  412953 type.go:168] "Request Body" body=""
	I1210 07:50:10.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:10.163191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:10.163249  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:10.662950  412953 type.go:168] "Request Body" body=""
	I1210 07:50:10.663039  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:10.663349  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:11.162723  412953 type.go:168] "Request Body" body=""
	I1210 07:50:11.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:11.163072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:11.662754  412953 type.go:168] "Request Body" body=""
	I1210 07:50:11.662830  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:11.663172  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:12.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:50:12.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:12.163253  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:12.163314  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:12.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:50:12.662806  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:12.663135  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:13.162827  412953 type.go:168] "Request Body" body=""
	I1210 07:50:13.162909  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:13.163270  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:13.662849  412953 type.go:168] "Request Body" body=""
	I1210 07:50:13.662924  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:13.663237  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:14.162709  412953 type.go:168] "Request Body" body=""
	I1210 07:50:14.162777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:14.163050  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:14.663072  412953 type.go:168] "Request Body" body=""
	I1210 07:50:14.663154  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:14.663480  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:50:14.663533  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:50:15.163298  412953 type.go:168] "Request Body" body=""
	I1210 07:50:15.163374  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:15.163686  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:15.662909  412953 node_ready.go:38] duration metric: took 6m0.000357427s for node "functional-314220" to be "Ready" ...
	I1210 07:50:15.669570  412953 out.go:203] 
	W1210 07:50:15.672493  412953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:50:15.672574  412953 out.go:285] * 
	W1210 07:50:15.674736  412953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:50:15.677520  412953 out.go:203] 
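
The pattern above is a fixed-interval readiness poll: minikube re-issues GET /api/v1/nodes/functional-314220 roughly every 500 ms, treats "connection refused" as transient, and only gives up when the 6-minute wait expires ("took 6m0.000357427s ... context deadline exceeded"). The following client-go sketch illustrates that loop; it is not minikube's actual implementation, and the kubeconfig path is a placeholder.

```go
// Illustrative sketch of the readiness poll shown in the log above; this is
// not minikube's actual implementation. The kubeconfig path and node name
// are placeholders, and the 500ms/6m values mirror the log's timestamps.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms for at most 6 minutes. Transient failures such as
	// "connect: connection refused" return (false, nil), so the loop keeps
	// retrying; only the expiring context ends it, which is exactly the
	// "context deadline exceeded" seen in the GUEST_START error above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-314220", metav1.GetOptions{})
			if err != nil {
				return false, nil // swallow and retry, as the log's warnings do
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("node never became Ready:", err)
	}
}
```

The design choice visible in the log is that the condition function swallows transient errors instead of aborting, so a dead apiserver produces six minutes of identical warnings rather than an immediate failure.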
	
	
	==> CRI-O <==
	Dec 10 07:50:24 functional-314220 crio[5354]: time="2025-12-10T07:50:24.426285434Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a6201215-74dc-42a0-9933-6ddae6ad702a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.473515285Z" level=info msg="Checking image status: minikube-local-cache-test:functional-314220" id=87665135-6c53-435d-b3ba-9221cc68b4f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.473693732Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.473735365Z" level=info msg="Image minikube-local-cache-test:functional-314220 not found" id=87665135-6c53-435d-b3ba-9221cc68b4f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.473819354Z" level=info msg="Neither image nor artifact minikube-local-cache-test:functional-314220 found" id=87665135-6c53-435d-b3ba-9221cc68b4f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.497396997Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-314220" id=dc94c4c8-4f43-4a3e-b913-9d29f560667a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.49757272Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-314220 not found" id=dc94c4c8-4f43-4a3e-b913-9d29f560667a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.497620277Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-314220 found" id=dc94c4c8-4f43-4a3e-b913-9d29f560667a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.525853929Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-314220" id=6393c6e3-c34a-4f60-9c57-ab1b0e07d2b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.525992114Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-314220 not found" id=6393c6e3-c34a-4f60-9c57-ab1b0e07d2b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.526030276Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-314220 found" id=6393c6e3-c34a-4f60-9c57-ab1b0e07d2b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:26 functional-314220 crio[5354]: time="2025-12-10T07:50:26.504883294Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4e2c2d6b-5704-434f-b99b-c5114ea47b56 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:26 functional-314220 crio[5354]: time="2025-12-10T07:50:26.831911404Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e4ba2987-c5e1-4ce5-913b-3e7c1134ef57 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:26 functional-314220 crio[5354]: time="2025-12-10T07:50:26.832058262Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=e4ba2987-c5e1-4ce5-913b-3e7c1134ef57 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:26 functional-314220 crio[5354]: time="2025-12-10T07:50:26.832092461Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=e4ba2987-c5e1-4ce5-913b-3e7c1134ef57 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.396335947Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3f049bd6-942f-4e06-9100-5081f476e811 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.39646554Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=3f049bd6-942f-4e06-9100-5081f476e811 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.396500494Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=3f049bd6-942f-4e06-9100-5081f476e811 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.424937488Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=21b99a54-1637-4e98-8582-fbb664a3115a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.425063513Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=21b99a54-1637-4e98-8582-fbb664a3115a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.425096678Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=21b99a54-1637-4e98-8582-fbb664a3115a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.461845777Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=bd754ccd-6835-4d76-aba4-e016306dde1c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.461978275Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=bd754ccd-6835-4d76-aba4-e016306dde1c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.462012868Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=bd754ccd-6835-4d76-aba4-e016306dde1c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.994690319Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1facc1de-b2a6-4eec-b200-1dc97b1f6f51 name=/runtime.v1.ImageService/ImageStatus
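
A note on the resolution sequence above: "minikube-local-cache-test" is an unqualified (short) image name, so CRI-O expands it against the configured unqualified-search registries, producing the docker.io/library/... and localhost/library/... candidates that are then checked in turn. A toy sketch of that expansion follows, assuming a two-entry search list; the real logic lives in the containers/image library and registries.conf.

```go
// Toy model of CRI-O's short-name expansion seen above. The real resolution
// is done by the containers/image library driven by registries.conf; the
// search list below is an assumption for illustration.
package main

import (
	"fmt"
	"strings"
)

// candidates expands an unqualified image reference against a list of
// unqualified-search registries. Single-segment names also gain the
// "library/" namespace, which is how "minikube-local-cache-test" becomes
// docker.io/library/... and localhost/library/... in the log.
func candidates(ref string, searchRegistries []string) []string {
	first := strings.Split(ref, "/")[0]
	if strings.Contains(first, ".") || first == "localhost" {
		return []string{ref} // already qualified with a registry host
	}
	var out []string
	for _, reg := range searchRegistries {
		name := ref
		if !strings.Contains(ref, "/") {
			name = "library/" + ref
		}
		out = append(out, reg+"/"+name)
	}
	return out
}

func main() {
	for _, c := range candidates("minikube-local-cache-test:functional-314220",
		[]string{"docker.io", "localhost"}) {
		fmt.Println(c) // prints the two fully qualified candidates from the log
	}
}
```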
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:50:29.476468    9358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:29.477003    9358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:29.478501    9358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:29.478933    9358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:29.480376    9358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
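
The repeated "connection refused" on [::1]:8441 means the TCP connect is being actively rejected, i.e. kube-apiserver is not listening, rather than the host being unreachable (which would surface as a timeout). A throwaway probe, illustrative only and not part of the test suite, makes the distinction concrete:

```go
// Throwaway probe (illustrative, not part of the test suite): a refused
// connection proves the host is reachable but nothing listens on the port,
// i.e. kube-apiserver is down rather than the network being broken.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver probe failed:", err) // here: connection refused
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}
```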
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	[Dec10 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:50:29 up  2:32,  0 user,  load average: 0.83, 0.40, 0.84
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:50:26 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:27 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1152.
	Dec 10 07:50:27 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:27 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:27 functional-314220 kubelet[9231]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:27 functional-314220 kubelet[9231]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:27 functional-314220 kubelet[9231]: E1210 07:50:27.711104    9231 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:27 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:27 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:28 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 10 07:50:28 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:28 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:28 functional-314220 kubelet[9255]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:28 functional-314220 kubelet[9255]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:28 functional-314220 kubelet[9255]: E1210 07:50:28.463061    9255 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:28 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:28 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:29 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 10 07:50:29 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:29 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:29 functional-314220 kubelet[9283]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:29 functional-314220 kubelet[9283]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:29 functional-314220 kubelet[9283]: E1210 07:50:29.227838    9283 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:29 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:29 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
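The kubelet section above is the root cause of this failure cascade: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so it crash-loops (restart counter 1152-1154), the apiserver on port 8441 never comes up, and every kubectl call is refused. The host is Ubuntu 20.04 on kernel 5.15.0-1084-aws with the cgroupfs driver (see the docker info dump under "Last Start" below), and the kubelet error itself confirms it is still on cgroup v1. A minimal diagnostic sketch for confirming the host's cgroup version (standard tooling, not part of the test suite):

	# "cgroup2fs" means a unified cgroup v2 hierarchy; "tmpfs" means legacy v1.
	stat -fc %T /sys/fs/cgroup/
	# Docker reports the same thing for the daemon.
	docker info --format '{{.CgroupVersion}}'
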
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (364.31536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.36s)
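The post-mortem helpers query one field of the minikube status struct at a time with a Go template ({{.APIServer}} here, {{.Host}} in the next section). A sketch of the same check collapsed into one call, assuming arbitrary Go templates over the fields named in this report are accepted:

	# Host stays Running while the control plane is down, so the two fields diverge.
	out/minikube-linux-arm64 status -p functional-314220 --format '{{.Host}} {{.APIServer}}'
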

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-314220 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-314220 get pods: exit status 1 (109.223783ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-314220 get pods": exit status 1
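Note the endpoint difference: the in-node kubeconfig used by "kubectl describe nodes" earlier points at localhost:8441, while the host context functional-314220 resolves to the container address 192.168.49.2:8441 (visible in the docker inspect below); both are refused because the apiserver is down. A sketch for reading the server URL recorded for this profile with plain kubectl jsonpath, assuming minikube names the cluster entry after the profile as it normally does:

	# Print the API server URL stored for the functional-314220 cluster entry.
	kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-314220")].cluster.server}'
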
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
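The inspect output shows 8441/tcp, the --apiserver-port chosen for this profile, published on 127.0.0.1:33161. The same Go-template trick the start log below uses for 22/tcp extracts it directly; a sketch with the port swapped in:

	# Print the host port mapped to the container's apiserver port.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-314220
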
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (323.42166ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-314220 logs -n 25: (1.000667616s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-446865 image ls --format short --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format yaml --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh     │ functional-446865 ssh pgrep buildkitd                                                                                                             │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ image   │ functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr                                            │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format json --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls                                                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format table --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ delete  │ -p functional-446865                                                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ start   │ -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ start   │ -p functional-314220 --alsologtostderr -v=8                                                                                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:44 UTC │                     │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:latest                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add minikube-local-cache-test:functional-314220                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache delete minikube-local-cache-test:functional-314220                                                                        │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl images                                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	│ cache   │ functional-314220 cache reload                                                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ kubectl │ functional-314220 kubectl -- --context functional-314220 get pods                                                                                 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:44:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:44:10.487397  412953 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:44:10.487521  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487566  412953 out.go:374] Setting ErrFile to fd 2...
	I1210 07:44:10.487572  412953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:44:10.487834  412953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:44:10.488205  412953 out.go:368] Setting JSON to false
	I1210 07:44:10.489052  412953 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8801,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:44:10.489127  412953 start.go:143] virtualization:  
	I1210 07:44:10.492628  412953 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:44:10.495451  412953 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:44:10.495581  412953 notify.go:221] Checking for updates...
	I1210 07:44:10.501282  412953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:44:10.504171  412953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:10.506968  412953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:44:10.509885  412953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:44:10.512742  412953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:44:10.516079  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:10.516221  412953 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:44:10.539133  412953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:44:10.539253  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.606789  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.597593273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.606896  412953 docker.go:319] overlay module found
	I1210 07:44:10.611915  412953 out.go:179] * Using the docker driver based on existing profile
	I1210 07:44:10.614862  412953 start.go:309] selected driver: docker
	I1210 07:44:10.614885  412953 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.614994  412953 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:44:10.615113  412953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:44:10.673141  412953 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 07:44:10.664474897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:44:10.673572  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:10.673631  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:10.673679  412953 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:10.678679  412953 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:44:10.681372  412953 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:44:10.684277  412953 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:44:10.687267  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:10.687329  412953 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:44:10.687343  412953 cache.go:65] Caching tarball of preloaded images
	I1210 07:44:10.687350  412953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:44:10.687434  412953 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:44:10.687444  412953 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 07:44:10.687550  412953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:44:10.707132  412953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:44:10.707156  412953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:44:10.707176  412953 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:44:10.707214  412953 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:44:10.707283  412953 start.go:364] duration metric: took 45.104µs to acquireMachinesLock for "functional-314220"
	I1210 07:44:10.707306  412953 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:44:10.707317  412953 fix.go:54] fixHost starting: 
	I1210 07:44:10.707577  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:10.723920  412953 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:44:10.723951  412953 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:44:10.727176  412953 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:44:10.727205  412953 machine.go:94] provisionDockerMachine start ...
	I1210 07:44:10.727283  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.744553  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.744931  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.744946  412953 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:44:10.878742  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:10.878763  412953 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:44:10.878828  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:10.897712  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:10.898057  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:10.898077  412953 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:44:11.052065  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:44:11.052160  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.072344  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.072686  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.072703  412953 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:44:11.207289  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:44:11.207317  412953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:44:11.207348  412953 ubuntu.go:190] setting up certificates
	I1210 07:44:11.207366  412953 provision.go:84] configureAuth start
	I1210 07:44:11.207429  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:11.224935  412953 provision.go:143] copyHostCerts
	I1210 07:44:11.224978  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225021  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:44:11.225032  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:44:11.225107  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:44:11.225201  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225224  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:44:11.225234  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:44:11.225268  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:44:11.225321  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225345  412953 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:44:11.225354  412953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:44:11.225380  412953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:44:11.225441  412953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:44:11.417392  412953 provision.go:177] copyRemoteCerts
	I1210 07:44:11.417460  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:44:11.417497  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.436410  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:11.535532  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 07:44:11.535603  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:44:11.553463  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 07:44:11.553526  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:44:11.571834  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 07:44:11.571892  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:44:11.590409  412953 provision.go:87] duration metric: took 383.016251ms to configureAuth
	I1210 07:44:11.590435  412953 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:44:11.590614  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:11.590731  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.608257  412953 main.go:143] libmachine: Using SSH client type: native
	I1210 07:44:11.608571  412953 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:44:11.608596  412953 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:44:11.906129  412953 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:44:11.906170  412953 machine.go:97] duration metric: took 1.17895657s to provisionDockerMachine
	I1210 07:44:11.906181  412953 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:44:11.906194  412953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:44:11.906264  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:44:11.906303  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:11.923285  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.019543  412953 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:44:12.023176  412953 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1210 07:44:12.023203  412953 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1210 07:44:12.023208  412953 command_runner.go:130] > VERSION_ID="12"
	I1210 07:44:12.023217  412953 command_runner.go:130] > VERSION="12 (bookworm)"
	I1210 07:44:12.023222  412953 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1210 07:44:12.023226  412953 command_runner.go:130] > ID=debian
	I1210 07:44:12.023231  412953 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1210 07:44:12.023236  412953 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1210 07:44:12.023245  412953 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1210 07:44:12.023295  412953 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:44:12.023316  412953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:44:12.023330  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:44:12.023386  412953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:44:12.023472  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:44:12.023483  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /etc/ssl/certs/3785282.pem
	I1210 07:44:12.023563  412953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:44:12.023571  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> /etc/test/nested/copy/378528/hosts
	I1210 07:44:12.023617  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:44:12.031659  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:12.049814  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:44:12.067644  412953 start.go:296] duration metric: took 161.447867ms for postStartSetup
	I1210 07:44:12.067748  412953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:44:12.067798  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.084856  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.184547  412953 command_runner.go:130] > 14%
	I1210 07:44:12.184639  412953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:44:12.189562  412953 command_runner.go:130] > 169G
	I1210 07:44:12.189589  412953 fix.go:56] duration metric: took 1.4822703s for fixHost
	I1210 07:44:12.189600  412953 start.go:83] releasing machines lock for "functional-314220", held for 1.482305303s
	I1210 07:44:12.189668  412953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:44:12.206193  412953 ssh_runner.go:195] Run: cat /version.json
	I1210 07:44:12.206242  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.206484  412953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:44:12.206547  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:12.229509  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.231766  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:12.322395  412953 command_runner.go:130] > {"iso_version": "v1.37.0-1765151505-21409", "kicbase_version": "v0.0.48-1765319469-22089", "minikube_version": "v1.37.0", "commit": "3b564f551de69272c9de22efc5b37f8a5b0156c7"}
	I1210 07:44:12.322525  412953 ssh_runner.go:195] Run: systemctl --version
	I1210 07:44:12.409743  412953 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 07:44:12.412779  412953 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1210 07:44:12.412818  412953 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1210 07:44:12.412894  412953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:44:12.460937  412953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 07:44:12.466609  412953 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 07:44:12.466697  412953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:44:12.466802  412953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:44:12.474626  412953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
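The find invocation above renames any bridge or podman CNI configs out of the way so they cannot conflict with the CNI minikube installs; with shell quoting restored it reads:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;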
	I1210 07:44:12.474651  412953 start.go:496] detecting cgroup driver to use...
	I1210 07:44:12.474708  412953 detect.go:187] detected "cgroupfs" cgroup driver on host os
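The driver choice depends on the host's cgroup setup; one quick way to inspect the hierarchy version (an illustration, not necessarily minikube's exact probe) is:

    # distinguishes cgroup v2 ("cgroup2fs") from v1 ("tmpfs") on the host
    stat -fc %T /sys/fs/cgroup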
	I1210 07:44:12.474780  412953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:44:12.490092  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:44:12.503562  412953 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:44:12.503627  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:44:12.518840  412953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:44:12.531838  412953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:44:12.642559  412953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:44:12.762873  412953 docker.go:234] disabling docker service ...
	I1210 07:44:12.762979  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:44:12.778725  412953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:44:12.791652  412953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:44:12.911705  412953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:44:13.035394  412953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
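Because the kicbase image ships multiple runtimes, the docker and cri-docker units are stopped, disabled and masked so that only CRI-O serves the CRI socket. The per-service sequence, exactly as logged for docker, is:

    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    # exit status is checked; non-zero here confirms docker stayed down
    sudo systemctl is-active --quiet service docker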
	I1210 07:44:13.049695  412953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:44:13.065431  412953 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
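The tee above leaves a one-line crictl config that points every later crictl call at the CRI-O socket:

    # /etc/crictl.yaml, as written by the command above
    runtime-endpoint: unix:///var/run/crio/crio.sock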
	I1210 07:44:13.065522  412953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:44:13.065609  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.075381  412953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:44:13.075482  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.085452  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.094855  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.104471  412953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:44:13.112786  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.121728  412953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.130205  412953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
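After this batch of sed edits the drop-in carries the four settings configured above. Reconstructed from the logged substitutions (the real file may hold further keys):

    # /etc/crio/crio.conf.d/02-crio.conf, relevant lines only
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]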
	I1210 07:44:13.139248  412953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:44:13.145900  412953 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 07:44:13.147163  412953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
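Two kernel prerequisites are then verified: bridged traffic must traverse iptables, and IPv4 forwarding must be enabled. As logged:

    sudo sysctl net.bridge.bridge-nf-call-iptables        # expect "= 1"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # force forwarding on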
	I1210 07:44:13.154995  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.289205  412953 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:44:13.445871  412953 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:44:13.446002  412953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:44:13.449677  412953 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 07:44:13.449750  412953 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 07:44:13.449774  412953 command_runner.go:130] > Device: 0,72	Inode: 1639        Links: 1
	I1210 07:44:13.449787  412953 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:13.449793  412953 command_runner.go:130] > Access: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449816  412953 command_runner.go:130] > Modify: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449826  412953 command_runner.go:130] > Change: 2025-12-10 07:44:13.388138306 +0000
	I1210 07:44:13.449830  412953 command_runner.go:130] >  Birth: -
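With the config in place, CRI-O is restarted and the socket is polled for up to 60s before start-up continues. The equivalent manual sequence:

    sudo systemctl daemon-reload
    sudo systemctl restart crio
    # a socket file with srw-rw---- permissions, as in the stat output above,
    # means the runtime is accepting connections
    stat /var/run/crio/crio.sock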
	I1210 07:44:13.449864  412953 start.go:564] Will wait 60s for crictl version
	I1210 07:44:13.449928  412953 ssh_runner.go:195] Run: which crictl
	I1210 07:44:13.453538  412953 command_runner.go:130] > /usr/local/bin/crictl
	I1210 07:44:13.453678  412953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:44:13.477475  412953 command_runner.go:130] > Version:  0.1.0
	I1210 07:44:13.477498  412953 command_runner.go:130] > RuntimeName:  cri-o
	I1210 07:44:13.477503  412953 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1210 07:44:13.477509  412953 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 07:44:13.477520  412953 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:44:13.477602  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.505751  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.505796  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.505803  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.505808  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.505813  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.505817  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.505821  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.505826  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.505835  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.505838  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.505844  412953 command_runner.go:130] >      static
	I1210 07:44:13.505848  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.505852  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.505859  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.505863  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.505874  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.505877  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.505881  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.505886  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.505895  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.507701  412953 ssh_runner.go:195] Run: crio --version
	I1210 07:44:13.535170  412953 command_runner.go:130] > crio version 1.34.3
	I1210 07:44:13.535233  412953 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1210 07:44:13.535254  412953 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1210 07:44:13.535275  412953 command_runner.go:130] >    GitTreeState:   dirty
	I1210 07:44:13.535296  412953 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1210 07:44:13.535314  412953 command_runner.go:130] >    GoVersion:      go1.24.6
	I1210 07:44:13.535334  412953 command_runner.go:130] >    Compiler:       gc
	I1210 07:44:13.535358  412953 command_runner.go:130] >    Platform:       linux/arm64
	I1210 07:44:13.535377  412953 command_runner.go:130] >    Linkmode:       static
	I1210 07:44:13.535395  412953 command_runner.go:130] >    BuildTags:
	I1210 07:44:13.535414  412953 command_runner.go:130] >      static
	I1210 07:44:13.535432  412953 command_runner.go:130] >      netgo
	I1210 07:44:13.535451  412953 command_runner.go:130] >      osusergo
	I1210 07:44:13.535471  412953 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1210 07:44:13.535489  412953 command_runner.go:130] >      seccomp
	I1210 07:44:13.535518  412953 command_runner.go:130] >      apparmor
	I1210 07:44:13.535548  412953 command_runner.go:130] >      selinux
	I1210 07:44:13.535566  412953 command_runner.go:130] >    LDFlags:          unknown
	I1210 07:44:13.535590  412953 command_runner.go:130] >    SeccompEnabled:   true
	I1210 07:44:13.535609  412953 command_runner.go:130] >    AppArmorEnabled:  false
	I1210 07:44:13.540516  412953 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:44:13.543340  412953 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:44:13.558881  412953 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:44:13.562785  412953 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1210 07:44:13.562964  412953 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:44:13.563103  412953 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:44:13.563170  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.592036  412953 command_runner.go:130] > {
	I1210 07:44:13.592059  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.592064  412953 command_runner.go:130] >     {
	I1210 07:44:13.592073  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.592083  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592089  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.592093  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592096  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592118  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.592130  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.592138  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592144  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.592154  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592159  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592163  412953 command_runner.go:130] >     },
	I1210 07:44:13.592169  412953 command_runner.go:130] >     {
	I1210 07:44:13.592176  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.592183  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592189  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.592192  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592196  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592207  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.592217  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.592221  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592225  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.592231  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592239  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592246  412953 command_runner.go:130] >     },
	I1210 07:44:13.592249  412953 command_runner.go:130] >     {
	I1210 07:44:13.592255  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.592264  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592269  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.592272  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592278  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592286  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.592297  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.592300  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592306  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.592311  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.592317  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592320  412953 command_runner.go:130] >     },
	I1210 07:44:13.592329  412953 command_runner.go:130] >     {
	I1210 07:44:13.592338  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.592342  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592354  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.592357  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592361  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592374  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.592381  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.592387  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592391  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.592395  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592401  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592405  412953 command_runner.go:130] >       },
	I1210 07:44:13.592420  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592424  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592429  412953 command_runner.go:130] >     },
	I1210 07:44:13.592433  412953 command_runner.go:130] >     {
	I1210 07:44:13.592446  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.592450  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592457  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.592461  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592465  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592474  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.592484  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.592488  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592494  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.592498  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592522  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592525  412953 command_runner.go:130] >       },
	I1210 07:44:13.592530  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592538  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592541  412953 command_runner.go:130] >     },
	I1210 07:44:13.592545  412953 command_runner.go:130] >     {
	I1210 07:44:13.592556  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.592563  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592569  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.592579  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592582  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592591  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.592602  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.592606  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592616  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.592619  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592623  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592628  412953 command_runner.go:130] >       },
	I1210 07:44:13.592633  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592639  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592642  412953 command_runner.go:130] >     },
	I1210 07:44:13.592645  412953 command_runner.go:130] >     {
	I1210 07:44:13.592652  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.592663  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592669  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.592674  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592678  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592691  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.592702  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.592706  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592712  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.592717  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592723  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592726  412953 command_runner.go:130] >     },
	I1210 07:44:13.592729  412953 command_runner.go:130] >     {
	I1210 07:44:13.592735  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.592741  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592747  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.592750  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592764  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592772  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.592793  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.592800  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592804  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.592808  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592817  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.592820  412953 command_runner.go:130] >       },
	I1210 07:44:13.592824  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592830  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.592834  412953 command_runner.go:130] >     },
	I1210 07:44:13.592843  412953 command_runner.go:130] >     {
	I1210 07:44:13.592849  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.592853  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.592858  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.592866  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592870  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.592878  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.592888  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.592892  412953 command_runner.go:130] >       ],
	I1210 07:44:13.592898  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.592902  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.592911  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.592914  412953 command_runner.go:130] >       },
	I1210 07:44:13.592918  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.592924  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.592927  412953 command_runner.go:130] >     }
	I1210 07:44:13.592932  412953 command_runner.go:130] >   ]
	I1210 07:44:13.592935  412953 command_runner.go:130] > }
	I1210 07:44:13.595219  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.595245  412953 crio.go:433] Images already preloaded, skipping extraction
	I1210 07:44:13.595305  412953 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:44:13.620833  412953 command_runner.go:130] > {
	I1210 07:44:13.620851  412953 command_runner.go:130] >   "images":  [
	I1210 07:44:13.620856  412953 command_runner.go:130] >     {
	I1210 07:44:13.620865  412953 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1210 07:44:13.620870  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620884  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1210 07:44:13.620888  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620896  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620905  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1210 07:44:13.620913  412953 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1210 07:44:13.620917  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620921  412953 command_runner.go:130] >       "size":  "111333938",
	I1210 07:44:13.620925  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620930  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620933  412953 command_runner.go:130] >     },
	I1210 07:44:13.620936  412953 command_runner.go:130] >     {
	I1210 07:44:13.620943  412953 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1210 07:44:13.620947  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.620952  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 07:44:13.620955  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620958  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.620966  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1210 07:44:13.620975  412953 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1210 07:44:13.620978  412953 command_runner.go:130] >       ],
	I1210 07:44:13.620982  412953 command_runner.go:130] >       "size":  "29037500",
	I1210 07:44:13.620985  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.620991  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.620994  412953 command_runner.go:130] >     },
	I1210 07:44:13.620997  412953 command_runner.go:130] >     {
	I1210 07:44:13.621003  412953 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1210 07:44:13.621007  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621012  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1210 07:44:13.621015  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621019  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621027  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1210 07:44:13.621035  412953 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1210 07:44:13.621038  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621042  412953 command_runner.go:130] >       "size":  "74491780",
	I1210 07:44:13.621046  412953 command_runner.go:130] >       "username":  "nonroot",
	I1210 07:44:13.621049  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621056  412953 command_runner.go:130] >     },
	I1210 07:44:13.621059  412953 command_runner.go:130] >     {
	I1210 07:44:13.621066  412953 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1210 07:44:13.621070  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621075  412953 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1210 07:44:13.621079  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621083  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621091  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1210 07:44:13.621098  412953 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1210 07:44:13.621102  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621105  412953 command_runner.go:130] >       "size":  "60857170",
	I1210 07:44:13.621109  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621113  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621116  412953 command_runner.go:130] >       },
	I1210 07:44:13.621124  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621128  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621131  412953 command_runner.go:130] >     },
	I1210 07:44:13.621134  412953 command_runner.go:130] >     {
	I1210 07:44:13.621143  412953 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1210 07:44:13.621147  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621152  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1210 07:44:13.621156  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621159  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621167  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1210 07:44:13.621175  412953 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1210 07:44:13.621178  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621182  412953 command_runner.go:130] >       "size":  "84949999",
	I1210 07:44:13.621185  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621189  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621192  412953 command_runner.go:130] >       },
	I1210 07:44:13.621196  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621199  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621202  412953 command_runner.go:130] >     },
	I1210 07:44:13.621208  412953 command_runner.go:130] >     {
	I1210 07:44:13.621214  412953 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1210 07:44:13.621218  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621224  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1210 07:44:13.621227  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621231  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621239  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1210 07:44:13.621247  412953 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1210 07:44:13.621250  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621255  412953 command_runner.go:130] >       "size":  "72170325",
	I1210 07:44:13.621258  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621262  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621265  412953 command_runner.go:130] >       },
	I1210 07:44:13.621268  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621272  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621275  412953 command_runner.go:130] >     },
	I1210 07:44:13.621278  412953 command_runner.go:130] >     {
	I1210 07:44:13.621285  412953 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1210 07:44:13.621289  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621294  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1210 07:44:13.621297  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621301  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621309  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1210 07:44:13.621317  412953 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1210 07:44:13.621320  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621324  412953 command_runner.go:130] >       "size":  "74106775",
	I1210 07:44:13.621327  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621331  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621334  412953 command_runner.go:130] >     },
	I1210 07:44:13.621337  412953 command_runner.go:130] >     {
	I1210 07:44:13.621343  412953 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1210 07:44:13.621347  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621352  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1210 07:44:13.621359  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621363  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621371  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1210 07:44:13.621390  412953 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1210 07:44:13.621393  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621397  412953 command_runner.go:130] >       "size":  "49822549",
	I1210 07:44:13.621401  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621404  412953 command_runner.go:130] >         "value":  "0"
	I1210 07:44:13.621408  412953 command_runner.go:130] >       },
	I1210 07:44:13.621411  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621415  412953 command_runner.go:130] >       "pinned":  false
	I1210 07:44:13.621418  412953 command_runner.go:130] >     },
	I1210 07:44:13.621421  412953 command_runner.go:130] >     {
	I1210 07:44:13.621427  412953 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1210 07:44:13.621431  412953 command_runner.go:130] >       "repoTags":  [
	I1210 07:44:13.621435  412953 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.621438  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621442  412953 command_runner.go:130] >       "repoDigests":  [
	I1210 07:44:13.621449  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1210 07:44:13.621456  412953 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1210 07:44:13.621459  412953 command_runner.go:130] >       ],
	I1210 07:44:13.621463  412953 command_runner.go:130] >       "size":  "519884",
	I1210 07:44:13.621466  412953 command_runner.go:130] >       "uid":  {
	I1210 07:44:13.621470  412953 command_runner.go:130] >         "value":  "65535"
	I1210 07:44:13.621473  412953 command_runner.go:130] >       },
	I1210 07:44:13.621477  412953 command_runner.go:130] >       "username":  "",
	I1210 07:44:13.621481  412953 command_runner.go:130] >       "pinned":  true
	I1210 07:44:13.621483  412953 command_runner.go:130] >     }
	I1210 07:44:13.621486  412953 command_runner.go:130] >   ]
	I1210 07:44:13.621490  412953 command_runner.go:130] > }
	I1210 07:44:13.622855  412953 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:44:13.622877  412953 cache_images.go:86] Images are preloaded, skipping loading
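The preload check boils down to comparing the crictl image list against the images expected for v1.35.0-beta.0 on crio. To eyeball the same data on the node, a jq one-liner works (assuming jq is available there, which this log does not show):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'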
	I1210 07:44:13.622884  412953 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:44:13.622995  412953 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
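The fragment above becomes a systemd drop-in overriding the kubelet ExecStart with the cluster-specific flags (node IP, hostname override, kubeconfig paths). Once applied it can be inspected with a stock systemd command (not part of this log):

    systemctl cat kubelet --no-pager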
	I1210 07:44:13.623104  412953 ssh_runner.go:195] Run: crio config
	I1210 07:44:13.670610  412953 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 07:44:13.670640  412953 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 07:44:13.670648  412953 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 07:44:13.670652  412953 command_runner.go:130] > #
	I1210 07:44:13.670659  412953 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 07:44:13.670667  412953 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 07:44:13.670674  412953 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 07:44:13.670691  412953 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 07:44:13.670699  412953 command_runner.go:130] > # reload'.
	I1210 07:44:13.670706  412953 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 07:44:13.670713  412953 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 07:44:13.670722  412953 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 07:44:13.670728  412953 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 07:44:13.670733  412953 command_runner.go:130] > [crio]
	I1210 07:44:13.670747  412953 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 07:44:13.670755  412953 command_runner.go:130] > # containers images, in this directory.
	I1210 07:44:13.670764  412953 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1210 07:44:13.670774  412953 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 07:44:13.670784  412953 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1210 07:44:13.670792  412953 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 07:44:13.670799  412953 command_runner.go:130] > # imagestore = ""
	I1210 07:44:13.670805  412953 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 07:44:13.670812  412953 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 07:44:13.670819  412953 command_runner.go:130] > # storage_driver = "overlay"
	I1210 07:44:13.670826  412953 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 07:44:13.670832  412953 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 07:44:13.670839  412953 command_runner.go:130] > # storage_option = [
	I1210 07:44:13.670842  412953 command_runner.go:130] > # ]
	I1210 07:44:13.670848  412953 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 07:44:13.670854  412953 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 07:44:13.670864  412953 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 07:44:13.670876  412953 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 07:44:13.670886  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 07:44:13.670890  412953 command_runner.go:130] > # always happen on a node reboot
	I1210 07:44:13.670897  412953 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 07:44:13.670908  412953 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 07:44:13.670916  412953 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 07:44:13.670921  412953 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 07:44:13.670927  412953 command_runner.go:130] > # version_file_persist = ""
	I1210 07:44:13.670948  412953 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 07:44:13.670957  412953 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 07:44:13.670965  412953 command_runner.go:130] > # internal_wipe = true
	I1210 07:44:13.670973  412953 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 07:44:13.670982  412953 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 07:44:13.670986  412953 command_runner.go:130] > # internal_repair = true
	I1210 07:44:13.670992  412953 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 07:44:13.671000  412953 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 07:44:13.671005  412953 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 07:44:13.671033  412953 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 07:44:13.671041  412953 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 07:44:13.671047  412953 command_runner.go:130] > [crio.api]
	I1210 07:44:13.671052  412953 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 07:44:13.671057  412953 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 07:44:13.671064  412953 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 07:44:13.671297  412953 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 07:44:13.671315  412953 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 07:44:13.671322  412953 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 07:44:13.671326  412953 command_runner.go:130] > # stream_port = "0"
	I1210 07:44:13.671356  412953 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 07:44:13.671366  412953 command_runner.go:130] > # stream_enable_tls = false
	I1210 07:44:13.671373  412953 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 07:44:13.671558  412953 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 07:44:13.671575  412953 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 07:44:13.671582  412953 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671587  412953 command_runner.go:130] > # stream_tls_cert = ""
	I1210 07:44:13.671593  412953 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 07:44:13.671617  412953 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1210 07:44:13.671819  412953 command_runner.go:130] > # stream_tls_key = ""
	I1210 07:44:13.671835  412953 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 07:44:13.671853  412953 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 07:44:13.671864  412953 command_runner.go:130] > # automatically pick up the changes.
	I1210 07:44:13.671868  412953 command_runner.go:130] > # stream_tls_ca = ""
	I1210 07:44:13.671887  412953 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.671896  412953 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1210 07:44:13.671903  412953 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 07:44:13.672102  412953 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1210 07:44:13.672121  412953 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 07:44:13.672128  412953 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 07:44:13.672131  412953 command_runner.go:130] > [crio.runtime]
	I1210 07:44:13.672137  412953 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 07:44:13.672162  412953 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 07:44:13.672172  412953 command_runner.go:130] > # "nofile=1024:2048"
	I1210 07:44:13.672179  412953 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 07:44:13.672183  412953 command_runner.go:130] > # default_ulimits = [
	I1210 07:44:13.672188  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672195  412953 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 07:44:13.672201  412953 command_runner.go:130] > # no_pivot = false
	I1210 07:44:13.672207  412953 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 07:44:13.672214  412953 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 07:44:13.672219  412953 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 07:44:13.672235  412953 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 07:44:13.672241  412953 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 07:44:13.672248  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672445  412953 command_runner.go:130] > # conmon = ""
	I1210 07:44:13.672461  412953 command_runner.go:130] > # Cgroup setting for conmon
	I1210 07:44:13.672469  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 07:44:13.672473  412953 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 07:44:13.672480  412953 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 07:44:13.672502  412953 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 07:44:13.672522  412953 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 07:44:13.672875  412953 command_runner.go:130] > # conmon_env = [
	I1210 07:44:13.672888  412953 command_runner.go:130] > # ]
	I1210 07:44:13.672895  412953 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 07:44:13.672900  412953 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 07:44:13.672907  412953 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 07:44:13.673114  412953 command_runner.go:130] > # default_env = [
	I1210 07:44:13.673128  412953 command_runner.go:130] > # ]
	I1210 07:44:13.673149  412953 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 07:44:13.673177  412953 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1210 07:44:13.673192  412953 command_runner.go:130] > # selinux = false
	I1210 07:44:13.673200  412953 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 07:44:13.673211  412953 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1210 07:44:13.673216  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673222  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.673228  412953 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1210 07:44:13.673240  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673428  412953 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1210 07:44:13.673444  412953 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 07:44:13.673452  412953 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 07:44:13.673459  412953 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 07:44:13.673478  412953 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 07:44:13.673488  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673492  412953 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 07:44:13.673498  412953 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 07:44:13.673505  412953 command_runner.go:130] > # the cgroup blockio controller.
	I1210 07:44:13.673509  412953 command_runner.go:130] > # blockio_config_file = ""
	I1210 07:44:13.673515  412953 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 07:44:13.673522  412953 command_runner.go:130] > # blockio parameters.
	I1210 07:44:13.673725  412953 command_runner.go:130] > # blockio_reload = false
	I1210 07:44:13.673738  412953 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 07:44:13.673742  412953 command_runner.go:130] > # irqbalance daemon.
	I1210 07:44:13.673748  412953 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 07:44:13.673757  412953 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1210 07:44:13.673788  412953 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 07:44:13.673801  412953 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 07:44:13.673807  412953 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 07:44:13.673816  412953 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 07:44:13.673821  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.673830  412953 command_runner.go:130] > # rdt_config_file = ""
	I1210 07:44:13.673837  412953 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 07:44:13.674053  412953 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 07:44:13.674071  412953 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 07:44:13.674076  412953 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 07:44:13.674083  412953 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 07:44:13.674102  412953 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 07:44:13.674116  412953 command_runner.go:130] > # will be added.
	I1210 07:44:13.674121  412953 command_runner.go:130] > # default_capabilities = [
	I1210 07:44:13.674343  412953 command_runner.go:130] > # 	"CHOWN",
	I1210 07:44:13.674352  412953 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 07:44:13.674356  412953 command_runner.go:130] > # 	"FSETID",
	I1210 07:44:13.674359  412953 command_runner.go:130] > # 	"FOWNER",
	I1210 07:44:13.674363  412953 command_runner.go:130] > # 	"SETGID",
	I1210 07:44:13.674366  412953 command_runner.go:130] > # 	"SETUID",
	I1210 07:44:13.674423  412953 command_runner.go:130] > # 	"SETPCAP",
	I1210 07:44:13.674435  412953 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 07:44:13.674593  412953 command_runner.go:130] > # 	"KILL",
	I1210 07:44:13.674604  412953 command_runner.go:130] > # ]
	I1210 07:44:13.674621  412953 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 07:44:13.674632  412953 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 07:44:13.674812  412953 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 07:44:13.674829  412953 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 07:44:13.674836  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.674844  412953 command_runner.go:130] > default_sysctls = [
	I1210 07:44:13.674849  412953 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 07:44:13.674855  412953 command_runner.go:130] > ]
	I1210 07:44:13.674860  412953 command_runner.go:130] > # List of devices on the host that a
	I1210 07:44:13.674883  412953 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 07:44:13.674902  412953 command_runner.go:130] > # allowed_devices = [
	I1210 07:44:13.675282  412953 command_runner.go:130] > # 	"/dev/fuse",
	I1210 07:44:13.675296  412953 command_runner.go:130] > # 	"/dev/net/tun",
	I1210 07:44:13.675300  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675305  412953 command_runner.go:130] > # List of additional devices, specified as
	I1210 07:44:13.675313  412953 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 07:44:13.675339  412953 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 07:44:13.675346  412953 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 07:44:13.675350  412953 command_runner.go:130] > # additional_devices = [
	I1210 07:44:13.675524  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675539  412953 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 07:44:13.675543  412953 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 07:44:13.675549  412953 command_runner.go:130] > # 	"/etc/cdi",
	I1210 07:44:13.675552  412953 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 07:44:13.675555  412953 command_runner.go:130] > # ]
	I1210 07:44:13.675562  412953 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 07:44:13.675584  412953 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 07:44:13.675594  412953 command_runner.go:130] > # Defaults to false.
	I1210 07:44:13.675951  412953 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 07:44:13.675970  412953 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 07:44:13.675978  412953 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 07:44:13.675982  412953 command_runner.go:130] > # hooks_dir = [
	I1210 07:44:13.676213  412953 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 07:44:13.676224  412953 command_runner.go:130] > # ]
	I1210 07:44:13.676231  412953 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 07:44:13.676237  412953 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 07:44:13.676246  412953 command_runner.go:130] > # its default mounts from the following two files:
	I1210 07:44:13.676261  412953 command_runner.go:130] > #
	I1210 07:44:13.676273  412953 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 07:44:13.676280  412953 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 07:44:13.676286  412953 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 07:44:13.676291  412953 command_runner.go:130] > #
	I1210 07:44:13.676298  412953 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 07:44:13.676304  412953 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 07:44:13.676313  412953 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 07:44:13.676318  412953 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 07:44:13.676321  412953 command_runner.go:130] > #
	I1210 07:44:13.676325  412953 command_runner.go:130] > # default_mounts_file = ""
	I1210 07:44:13.676345  412953 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 07:44:13.676358  412953 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 07:44:13.676363  412953 command_runner.go:130] > # pids_limit = -1
	I1210 07:44:13.676375  412953 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 07:44:13.676381  412953 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 07:44:13.676391  412953 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 07:44:13.676400  412953 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 07:44:13.676412  412953 command_runner.go:130] > # log_size_max = -1
	I1210 07:44:13.676423  412953 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 07:44:13.676626  412953 command_runner.go:130] > # log_to_journald = false
	I1210 07:44:13.676643  412953 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 07:44:13.676650  412953 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 07:44:13.676677  412953 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 07:44:13.676879  412953 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 07:44:13.676891  412953 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 07:44:13.676896  412953 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 07:44:13.676903  412953 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 07:44:13.676909  412953 command_runner.go:130] > # read_only = false
	I1210 07:44:13.676916  412953 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 07:44:13.676942  412953 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 07:44:13.676953  412953 command_runner.go:130] > # live configuration reload.
	I1210 07:44:13.676956  412953 command_runner.go:130] > # log_level = "info"
	I1210 07:44:13.676967  412953 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 07:44:13.676977  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.677149  412953 command_runner.go:130] > # log_filter = ""
	I1210 07:44:13.677166  412953 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677173  412953 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 07:44:13.677177  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677186  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677212  412953 command_runner.go:130] > # uid_mappings = ""
	I1210 07:44:13.677225  412953 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 07:44:13.677231  412953 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 07:44:13.677238  412953 command_runner.go:130] > # separated by comma.
	I1210 07:44:13.677246  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677420  412953 command_runner.go:130] > # gid_mappings = ""
	I1210 07:44:13.677432  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 07:44:13.677439  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677446  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677455  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677480  412953 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 07:44:13.677493  412953 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 07:44:13.677500  412953 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 07:44:13.677512  412953 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 07:44:13.677522  412953 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 07:44:13.677681  412953 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 07:44:13.677697  412953 command_runner.go:130] > # The minimum amount of time in seconds to wait before issuing a timeout
	I1210 07:44:13.677705  412953 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 07:44:13.677711  412953 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1210 07:44:13.677936  412953 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 07:44:13.677953  412953 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 07:44:13.677960  412953 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 07:44:13.677965  412953 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 07:44:13.677970  412953 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 07:44:13.677991  412953 command_runner.go:130] > # drop_infra_ctr = true
	I1210 07:44:13.678004  412953 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 07:44:13.678011  412953 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 07:44:13.678020  412953 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 07:44:13.678031  412953 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 07:44:13.678039  412953 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 07:44:13.678048  412953 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 07:44:13.678054  412953 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 07:44:13.678068  412953 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 07:44:13.678282  412953 command_runner.go:130] > # shared_cpuset = ""
	I1210 07:44:13.678299  412953 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 07:44:13.678306  412953 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 07:44:13.678310  412953 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 07:44:13.678328  412953 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 07:44:13.678337  412953 command_runner.go:130] > # pinns_path = ""
	I1210 07:44:13.678343  412953 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 07:44:13.678349  412953 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 07:44:13.678540  412953 command_runner.go:130] > # enable_criu_support = true
	I1210 07:44:13.678551  412953 command_runner.go:130] > # Enable/disable the generation of the container and
	I1210 07:44:13.678558  412953 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1210 07:44:13.678563  412953 command_runner.go:130] > # enable_pod_events = false
	I1210 07:44:13.678572  412953 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 07:44:13.678599  412953 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 07:44:13.678604  412953 command_runner.go:130] > # default_runtime = "crun"
	I1210 07:44:13.678609  412953 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 07:44:13.678622  412953 command_runner.go:130] > # will cause container creation to fail (instead of the current behavior of creating them as directories).
	I1210 07:44:13.678632  412953 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 07:44:13.678642  412953 command_runner.go:130] > # creation as a file is not desired either.
	I1210 07:44:13.678651  412953 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 07:44:13.678663  412953 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 07:44:13.678672  412953 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 07:44:13.678923  412953 command_runner.go:130] > # ]
	I1210 07:44:13.678950  412953 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 07:44:13.678958  412953 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 07:44:13.678972  412953 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 07:44:13.678982  412953 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 07:44:13.678985  412953 command_runner.go:130] > #
	I1210 07:44:13.678990  412953 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 07:44:13.678995  412953 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 07:44:13.679001  412953 command_runner.go:130] > # runtime_type = "oci"
	I1210 07:44:13.679006  412953 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 07:44:13.679035  412953 command_runner.go:130] > # inherit_default_runtime = false
	I1210 07:44:13.679045  412953 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 07:44:13.679050  412953 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 07:44:13.679054  412953 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 07:44:13.679060  412953 command_runner.go:130] > # monitor_env = []
	I1210 07:44:13.679065  412953 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 07:44:13.679069  412953 command_runner.go:130] > # allowed_annotations = []
	I1210 07:44:13.679076  412953 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 07:44:13.679085  412953 command_runner.go:130] > # no_sync_log = false
	I1210 07:44:13.679101  412953 command_runner.go:130] > # default_annotations = {}
	I1210 07:44:13.679107  412953 command_runner.go:130] > # stream_websockets = false
	I1210 07:44:13.679111  412953 command_runner.go:130] > # seccomp_profile = ""
	I1210 07:44:13.679142  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.679152  412953 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 07:44:13.679158  412953 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 07:44:13.679174  412953 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 07:44:13.679188  412953 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 07:44:13.679194  412953 command_runner.go:130] > #   in $PATH.
	I1210 07:44:13.679200  412953 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 07:44:13.679207  412953 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 07:44:13.679213  412953 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 07:44:13.679219  412953 command_runner.go:130] > #   state.
	I1210 07:44:13.679225  412953 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 07:44:13.679231  412953 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 07:44:13.679240  412953 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1210 07:44:13.679252  412953 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1210 07:44:13.679260  412953 command_runner.go:130] > #   the values from the default runtime on load time.
	I1210 07:44:13.679267  412953 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 07:44:13.679274  412953 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 07:44:13.679281  412953 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 07:44:13.679291  412953 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 07:44:13.679296  412953 command_runner.go:130] > #   The currently recognized values are:
	I1210 07:44:13.679302  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 07:44:13.679311  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 07:44:13.679325  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 07:44:13.679338  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 07:44:13.679345  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 07:44:13.679357  412953 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 07:44:13.679365  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 07:44:13.679374  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 07:44:13.679380  412953 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 07:44:13.679398  412953 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1210 07:44:13.679409  412953 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1210 07:44:13.679420  412953 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1210 07:44:13.679430  412953 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1210 07:44:13.679436  412953 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1210 07:44:13.679445  412953 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1210 07:44:13.679452  412953 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1210 07:44:13.679461  412953 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 07:44:13.679464  412953 command_runner.go:130] > #   deprecated option "conmon".
	I1210 07:44:13.679478  412953 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 07:44:13.679487  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 07:44:13.679493  412953 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 07:44:13.679503  412953 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 07:44:13.679511  412953 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 07:44:13.679518  412953 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 07:44:13.679525  412953 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1210 07:44:13.679531  412953 command_runner.go:130] > #   conmon-rs by using:
	I1210 07:44:13.679539  412953 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1210 07:44:13.679560  412953 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1210 07:44:13.679570  412953 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1210 07:44:13.679579  412953 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 07:44:13.679584  412953 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 07:44:13.679593  412953 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1210 07:44:13.679603  412953 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1210 07:44:13.679608  412953 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1210 07:44:13.679617  412953 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1210 07:44:13.679637  412953 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1210 07:44:13.679641  412953 command_runner.go:130] > #   when a machine crash happens.
	I1210 07:44:13.679649  412953 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1210 07:44:13.679659  412953 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1210 07:44:13.679667  412953 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1210 07:44:13.679675  412953 command_runner.go:130] > #   seccomp profile for the runtime.
	I1210 07:44:13.679681  412953 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1210 07:44:13.679700  412953 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1210 07:44:13.679707  412953 command_runner.go:130] > #
	I1210 07:44:13.679712  412953 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 07:44:13.679716  412953 command_runner.go:130] > #
	I1210 07:44:13.679727  412953 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 07:44:13.679736  412953 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 07:44:13.679742  412953 command_runner.go:130] > #
	I1210 07:44:13.679749  412953 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 07:44:13.679756  412953 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 07:44:13.679761  412953 command_runner.go:130] > #
	I1210 07:44:13.679773  412953 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 07:44:13.679780  412953 command_runner.go:130] > # feature.
	I1210 07:44:13.679782  412953 command_runner.go:130] > #
	I1210 07:44:13.679788  412953 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1210 07:44:13.679799  412953 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 07:44:13.679805  412953 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 07:44:13.679811  412953 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 07:44:13.679819  412953 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 07:44:13.679824  412953 command_runner.go:130] > #
	I1210 07:44:13.679831  412953 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 07:44:13.679840  412953 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 07:44:13.679848  412953 command_runner.go:130] > #
	I1210 07:44:13.679858  412953 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1210 07:44:13.679864  412953 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 07:44:13.679869  412953 command_runner.go:130] > #
	I1210 07:44:13.679875  412953 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 07:44:13.679881  412953 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 07:44:13.679887  412953 command_runner.go:130] > # limitation.
	I1210 07:44:13.679891  412953 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1210 07:44:13.679896  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1210 07:44:13.679902  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.679909  412953 command_runner.go:130] > runtime_root = "/run/crun"
	I1210 07:44:13.679913  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.679932  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.679940  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.679944  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.679948  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.679957  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.679961  412953 command_runner.go:130] > allowed_annotations = [
	I1210 07:44:13.680169  412953 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1210 07:44:13.680183  412953 command_runner.go:130] > ]
	I1210 07:44:13.680190  412953 command_runner.go:130] > privileged_without_host_devices = false
	I1210 07:44:13.680195  412953 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 07:44:13.680200  412953 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1210 07:44:13.680204  412953 command_runner.go:130] > runtime_type = ""
	I1210 07:44:13.680218  412953 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 07:44:13.680228  412953 command_runner.go:130] > inherit_default_runtime = false
	I1210 07:44:13.680233  412953 command_runner.go:130] > runtime_config_path = ""
	I1210 07:44:13.680237  412953 command_runner.go:130] > container_min_memory = ""
	I1210 07:44:13.680244  412953 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 07:44:13.680248  412953 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 07:44:13.680257  412953 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 07:44:13.680461  412953 command_runner.go:130] > privileged_without_host_devices = false
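The crun and runc tables above follow the documented [crio.runtime.runtimes.runtime-handler] format. As a sketch, an additional VM-type handler could be declared the same way; the kata name and the /usr/bin/kata-runtime path are assumptions for illustration, not part of this run:

	# Hypothetical extra handler, mirroring the documented table format.
	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/kata-runtime"  # assumed install path
	runtime_type = "vm"
	runtime_root = "/run/kata"
	privileged_without_host_devices = true
	# Allow the device annotation documented in the field list above.
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]

A handler declared like this is selected when the CRI passes "kata" as the runtime handler; otherwise the default_runtime is used, as the comments above state.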
	I1210 07:44:13.680480  412953 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 07:44:13.680486  412953 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 07:44:13.680503  412953 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 07:44:13.680522  412953 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1210 07:44:13.680533  412953 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1210 07:44:13.680547  412953 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1210 07:44:13.680554  412953 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1210 07:44:13.680563  412953 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 07:44:13.680579  412953 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 07:44:13.680591  412953 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 07:44:13.680597  412953 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 07:44:13.680609  412953 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 07:44:13.680613  412953 command_runner.go:130] > # Example:
	I1210 07:44:13.680617  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 07:44:13.680625  412953 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 07:44:13.680632  412953 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 07:44:13.680643  412953 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 07:44:13.680656  412953 command_runner.go:130] > # cpuset = "0-1"
	I1210 07:44:13.680660  412953 command_runner.go:130] > # cpushares = "5"
	I1210 07:44:13.680672  412953 command_runner.go:130] > # cpuquota = "1000"
	I1210 07:44:13.680676  412953 command_runner.go:130] > # cpuperiod = "100000"
	I1210 07:44:13.680680  412953 command_runner.go:130] > # cpulimit = "35"
	I1210 07:44:13.680686  412953 command_runner.go:130] > # Where:
	I1210 07:44:13.680691  412953 command_runner.go:130] > # The workload name is workload-type.
	I1210 07:44:13.680706  412953 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 07:44:13.680717  412953 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 07:44:13.680730  412953 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 07:44:13.680742  412953 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 07:44:13.680748  412953 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1210 07:44:13.680756  412953 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 07:44:13.680763  412953 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 07:44:13.680767  412953 command_runner.go:130] > # Default value is set to true
	I1210 07:44:13.681004  412953 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 07:44:13.681022  412953 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 07:44:13.681028  412953 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 07:44:13.681032  412953 command_runner.go:130] > # Default value is set to 'false'
	I1210 07:44:13.681046  412953 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 07:44:13.681057  412953 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1210 07:44:13.681066  412953 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1210 07:44:13.681072  412953 command_runner.go:130] > # timezone = ""
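These host-level runtime toggles can likewise be set explicitly in a drop-in. A minimal sketch with illustrative values (this run leaves all three at their defaults):

	[crio.runtime]
	hostnetwork_disable_selinux = true
	disable_hostport_mapping = false
	timezone = "Local"  # match the host clock, per the comment above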
	I1210 07:44:13.681078  412953 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 07:44:13.681082  412953 command_runner.go:130] > #
	I1210 07:44:13.681089  412953 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 07:44:13.681101  412953 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1210 07:44:13.681105  412953 command_runner.go:130] > [crio.image]
	I1210 07:44:13.681112  412953 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 07:44:13.681133  412953 command_runner.go:130] > # default_transport = "docker://"
	I1210 07:44:13.681145  412953 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 07:44:13.681152  412953 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681158  412953 command_runner.go:130] > # global_auth_file = ""
	I1210 07:44:13.681163  412953 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 07:44:13.681168  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681175  412953 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1210 07:44:13.681182  412953 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 07:44:13.681198  412953 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 07:44:13.681207  412953 command_runner.go:130] > # This option supports live configuration reload.
	I1210 07:44:13.681403  412953 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 07:44:13.681421  412953 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 07:44:13.681429  412953 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 07:44:13.681436  412953 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 07:44:13.681442  412953 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 07:44:13.681460  412953 command_runner.go:130] > # pause_command = "/pause"
	I1210 07:44:13.681466  412953 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 07:44:13.681473  412953 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 07:44:13.681481  412953 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 07:44:13.681487  412953 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 07:44:13.681495  412953 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 07:44:13.681508  412953 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 07:44:13.681512  412953 command_runner.go:130] > # pinned_images = [
	I1210 07:44:13.681700  412953 command_runner.go:130] > # ]
	I1210 07:44:13.681712  412953 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 07:44:13.681720  412953 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 07:44:13.681726  412953 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 07:44:13.681733  412953 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 07:44:13.681759  412953 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 07:44:13.681771  412953 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1210 07:44:13.681777  412953 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 07:44:13.681786  412953 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 07:44:13.681793  412953 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 07:44:13.681800  412953 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or the
	I1210 07:44:13.681806  412953 command_runner.go:130] > # system-wide policy will be used as a fallback. Must be an absolute path.
	I1210 07:44:13.682016  412953 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 07:44:13.682034  412953 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 07:44:13.682042  412953 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 07:44:13.682046  412953 command_runner.go:130] > # changing them here.
	I1210 07:44:13.682052  412953 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1210 07:44:13.682069  412953 command_runner.go:130] > # insecure_registries = [
	I1210 07:44:13.682078  412953 command_runner.go:130] > # ]
	I1210 07:44:13.682085  412953 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 07:44:13.682090  412953 command_runner.go:130] > # ignore; "ignore" skips volumes entirely.
	I1210 07:44:13.682257  412953 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 07:44:13.682273  412953 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 07:44:13.682285  412953 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 07:44:13.682292  412953 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1210 07:44:13.682299  412953 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1210 07:44:13.682504  412953 command_runner.go:130] > # auto_reload_registries = false
	I1210 07:44:13.682520  412953 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1210 07:44:13.682532  412953 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1210 07:44:13.682540  412953 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1210 07:44:13.682567  412953 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1210 07:44:13.682578  412953 command_runner.go:130] > # The mode of short name resolution.
	I1210 07:44:13.682585  412953 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1210 07:44:13.682595  412953 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1210 07:44:13.682600  412953 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1210 07:44:13.682615  412953 command_runner.go:130] > # short_name_mode = "enforcing"
	I1210 07:44:13.682622  412953 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1210 07:44:13.682630  412953 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1210 07:44:13.683045  412953 command_runner.go:130] > # oci_artifact_mount_support = true
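Relevant to the pause_image and pinned_images comments above, a sketch of pinning the pause image so the kubelet's garbage collection skips it; the values mirror the defaults printed above, while the drop-in itself is hypothetical:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",
	]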
	I1210 07:44:13.683063  412953 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 07:44:13.683080  412953 command_runner.go:130] > # CNI plugins.
	I1210 07:44:13.683084  412953 command_runner.go:130] > [crio.network]
	I1210 07:44:13.683091  412953 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 07:44:13.683100  412953 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1210 07:44:13.683104  412953 command_runner.go:130] > # cni_default_network = ""
	I1210 07:44:13.683110  412953 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 07:44:13.683116  412953 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 07:44:13.683122  412953 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 07:44:13.683126  412953 command_runner.go:130] > # plugin_dirs = [
	I1210 07:44:13.683439  412953 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 07:44:13.683727  412953 command_runner.go:130] > # ]
	I1210 07:44:13.683742  412953 command_runner.go:130] > # List of included pod metrics.
	I1210 07:44:13.684014  412953 command_runner.go:130] > # included_pod_metrics = [
	I1210 07:44:13.684312  412953 command_runner.go:130] > # ]
	I1210 07:44:13.684328  412953 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1210 07:44:13.684333  412953 command_runner.go:130] > [crio.metrics]
	I1210 07:44:13.684339  412953 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 07:44:13.684905  412953 command_runner.go:130] > # enable_metrics = false
	I1210 07:44:13.684921  412953 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 07:44:13.684926  412953 command_runner.go:130] > # Per default all metrics are enabled.
	I1210 07:44:13.684933  412953 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 07:44:13.684946  412953 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 07:44:13.684969  412953 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 07:44:13.685240  412953 command_runner.go:130] > # metrics_collectors = [
	I1210 07:44:13.685580  412953 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 07:44:13.685893  412953 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 07:44:13.686203  412953 command_runner.go:130] > # 	"containers_oom_total",
	I1210 07:44:13.686514  412953 command_runner.go:130] > # 	"processes_defunct",
	I1210 07:44:13.686821  412953 command_runner.go:130] > # 	"operations_total",
	I1210 07:44:13.687152  412953 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 07:44:13.687476  412953 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 07:44:13.687786  412953 command_runner.go:130] > # 	"operations_errors_total",
	I1210 07:44:13.688090  412953 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 07:44:13.688395  412953 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 07:44:13.688727  412953 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 07:44:13.689070  412953 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 07:44:13.689083  412953 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 07:44:13.689089  412953 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 07:44:13.689093  412953 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 07:44:13.689098  412953 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 07:44:13.689634  412953 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1210 07:44:13.689646  412953 command_runner.go:130] > # ]
	I1210 07:44:13.689654  412953 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1210 07:44:13.689658  412953 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1210 07:44:13.689671  412953 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 07:44:13.689696  412953 command_runner.go:130] > # metrics_port = 9090
	I1210 07:44:13.689701  412953 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 07:44:13.689706  412953 command_runner.go:130] > # metrics_socket = ""
	I1210 07:44:13.689716  412953 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 07:44:13.689722  412953 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 07:44:13.689731  412953 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 07:44:13.689737  412953 command_runner.go:130] > # certificate on any modification event.
	I1210 07:44:13.689741  412953 command_runner.go:130] > # metrics_cert = ""
	I1210 07:44:13.689746  412953 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 07:44:13.689751  412953 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 07:44:13.689764  412953 command_runner.go:130] > # metrics_key = ""
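A sketch of switching the metrics server on with a subset of the collectors listed above (illustrative only; this run leaves enable_metrics at its default of false):

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	# Collect only a subset instead of the full default list above.
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]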
	I1210 07:44:13.689770  412953 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 07:44:13.689774  412953 command_runner.go:130] > [crio.tracing]
	I1210 07:44:13.689781  412953 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 07:44:13.689785  412953 command_runner.go:130] > # enable_tracing = false
	I1210 07:44:13.689792  412953 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1210 07:44:13.689799  412953 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1210 07:44:13.689806  412953 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 07:44:13.689833  412953 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1210 07:44:13.689842  412953 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 07:44:13.689845  412953 command_runner.go:130] > [crio.nri]
	I1210 07:44:13.689850  412953 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 07:44:13.689861  412953 command_runner.go:130] > # enable_nri = true
	I1210 07:44:13.689865  412953 command_runner.go:130] > # NRI socket to listen on.
	I1210 07:44:13.689873  412953 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 07:44:13.689877  412953 command_runner.go:130] > # NRI plugin directory to use.
	I1210 07:44:13.689882  412953 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 07:44:13.689890  412953 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 07:44:13.689894  412953 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 07:44:13.689900  412953 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 07:44:13.689965  412953 command_runner.go:130] > # nri_disable_connections = false
	I1210 07:44:13.689975  412953 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 07:44:13.689991  412953 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 07:44:13.689997  412953 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 07:44:13.690006  412953 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 07:44:13.690011  412953 command_runner.go:130] > # NRI default validator configuration.
	I1210 07:44:13.690018  412953 command_runner.go:130] > # If enabled, the built-in default validator can be used to reject a container if some
	I1210 07:44:13.690027  412953 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1210 07:44:13.690036  412953 command_runner.go:130] > # can be restricted/rejected:
	I1210 07:44:13.690044  412953 command_runner.go:130] > # - OCI hook injection
	I1210 07:44:13.690060  412953 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1210 07:44:13.690068  412953 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1210 07:44:13.690072  412953 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1210 07:44:13.690076  412953 command_runner.go:130] > # - adjustment of linux namespaces
	I1210 07:44:13.690083  412953 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1210 07:44:13.690093  412953 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1210 07:44:13.690099  412953 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1210 07:44:13.690107  412953 command_runner.go:130] > #
	I1210 07:44:13.690111  412953 command_runner.go:130] > # [crio.nri.default_validator]
	I1210 07:44:13.690115  412953 command_runner.go:130] > # nri_enable_default_validator = false
	I1210 07:44:13.690122  412953 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1210 07:44:13.690134  412953 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1210 07:44:13.690148  412953 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1210 07:44:13.690154  412953 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1210 07:44:13.690159  412953 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1210 07:44:13.690165  412953 command_runner.go:130] > # nri_validator_required_plugins = [
	I1210 07:44:13.690168  412953 command_runner.go:130] > # ]
	I1210 07:44:13.690174  412953 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
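A sketch of turning on the built-in NRI validator described above (illustrative; the validator options all default to false in this run's config):

	[crio.nri]
	enable_nri = true
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	# Reject containers for which an NRI plugin injected an OCI hook.
	nri_validator_reject_oci_hook_adjustment = true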
	I1210 07:44:13.690182  412953 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 07:44:13.690192  412953 command_runner.go:130] > [crio.stats]
	I1210 07:44:13.690198  412953 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 07:44:13.690212  412953 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 07:44:13.690219  412953 command_runner.go:130] > # stats_collection_period = 0
	I1210 07:44:13.690225  412953 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1210 07:44:13.690232  412953 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1210 07:44:13.690243  412953 command_runner.go:130] > # collection_period = 0
	I1210 07:44:13.692149  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648702659Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1210 07:44:13.692177  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648881459Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1210 07:44:13.692188  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.648978856Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1210 07:44:13.692196  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649067965Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1210 07:44:13.692212  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649235303Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:44:13.692221  412953 command_runner.go:130] ! time="2025-12-10T07:44:13.649618857Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1210 07:44:13.692237  412953 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 07:44:13.692317  412953 cni.go:84] Creating CNI manager for ""
	I1210 07:44:13.692335  412953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:44:13.692359  412953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:44:13.692385  412953 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:44:13.692523  412953 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
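The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a minimal sketch of how a multi-document config like this can be sanity-checked before shipping it to the node, assuming gopkg.in/yaml.v3 (the expected-header check and all names here are illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    // header is the part every document in the file must carry.
    type header struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // yaml.Decoder walks the "---"-separated documents one at a time.
        dec := yaml.NewDecoder(f)
        for i := 1; ; i++ {
            var h header
            if err := dec.Decode(&h); err == io.EOF {
                break
            } else if err != nil {
                log.Fatalf("document %d: %v", i, err)
            }
            if h.APIVersion == "" || h.Kind == "" {
                log.Fatalf("document %d: missing apiVersion or kind", i)
            }
            fmt.Printf("document %d: %s/%s\n", i, h.APIVersion, h.Kind)
        }
    }
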
	I1210 07:44:13.692606  412953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:44:13.699318  412953 command_runner.go:130] > kubeadm
	I1210 07:44:13.699338  412953 command_runner.go:130] > kubectl
	I1210 07:44:13.699343  412953 command_runner.go:130] > kubelet
	I1210 07:44:13.700197  412953 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:44:13.700295  412953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:44:13.707538  412953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:44:13.720130  412953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:44:13.732445  412953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1210 07:44:13.744899  412953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:44:13.748570  412953 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1210 07:44:13.748818  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:13.875367  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:13.911048  412953 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:44:13.911077  412953 certs.go:195] generating shared ca certs ...
	I1210 07:44:13.911094  412953 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:13.911231  412953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:44:13.911285  412953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:44:13.911297  412953 certs.go:257] generating profile certs ...
	I1210 07:44:13.911404  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:44:13.911477  412953 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:44:13.911525  412953 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:44:13.911539  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 07:44:13.911552  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 07:44:13.911567  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 07:44:13.911578  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 07:44:13.911593  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 07:44:13.911610  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 07:44:13.911622  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 07:44:13.911637  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 07:44:13.911683  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:44:13.911717  412953 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:44:13.911729  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:44:13.911762  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:44:13.911791  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:44:13.911819  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:44:13.911865  412953 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:44:13.911900  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> /usr/share/ca-certificates/3785282.pem
	I1210 07:44:13.911918  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:13.911928  412953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem -> /usr/share/ca-certificates/378528.pem
	I1210 07:44:13.912577  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:44:13.931574  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:44:13.949287  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:44:13.966704  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:44:13.984537  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:44:14.005273  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:44:14.024726  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:44:14.043246  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:44:14.061500  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:44:14.078597  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:44:14.096003  412953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:44:14.113316  412953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:44:14.125784  412953 ssh_runner.go:195] Run: openssl version
	I1210 07:44:14.132223  412953 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1210 07:44:14.132300  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.139621  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:44:14.146891  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150749  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150804  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.150854  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:44:14.191223  412953 command_runner.go:130] > 3ec20f2e
	I1210 07:44:14.191672  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:44:14.199095  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.206573  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:44:14.214321  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218345  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218446  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.218516  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:44:14.259240  412953 command_runner.go:130] > b5213941
	I1210 07:44:14.259776  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:44:14.267399  412953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.274814  412953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:44:14.282253  412953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286034  412953 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286101  412953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.286170  412953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:44:14.327536  412953 command_runner.go:130] > 51391683
	I1210 07:44:14.327674  412953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
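The openssl sequence above is how minikube registers extra CA certificates: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its OpenSSL subject hash plus a .0 suffix (e.g. b5213941.0 for minikubeCA.pem). A small sketch of the equivalent symlink verification in Go, assuming the hash was already produced by `openssl x509 -hash -noout` as logged (verifyHashLink is an illustrative name, not minikube's code):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    // verifyHashLink checks that /etc/ssl/certs/<hash>.0 is a symlink
    // pointing at the expected certificate, mirroring the
    // `sudo test -L` / `ln -fs` steps in the log above.
    func verifyHashLink(hash, wantTarget string) error {
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        target, err := os.Readlink(link) // fails if missing or not a symlink
        if err != nil {
            return err
        }
        if target != wantTarget {
            return fmt.Errorf("%s points at %s, want %s", link, target, wantTarget)
        }
        return nil
    }

    func main() {
        // Hash and target taken from the log: minikubeCA.pem hashed to b5213941.
        if err := verifyHashLink("b5213941", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("hash symlink OK")
    }
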
	I1210 07:44:14.335034  412953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338581  412953 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:44:14.338609  412953 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 07:44:14.338616  412953 command_runner.go:130] > Device: 259,1	Inode: 1322411     Links: 1
	I1210 07:44:14.338623  412953 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 07:44:14.338628  412953 command_runner.go:130] > Access: 2025-12-10 07:40:07.276287392 +0000
	I1210 07:44:14.338634  412953 command_runner.go:130] > Modify: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338639  412953 command_runner.go:130] > Change: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338644  412953 command_runner.go:130] >  Birth: 2025-12-10 07:36:02.667723996 +0000
	I1210 07:44:14.338702  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:44:14.379186  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.379683  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:44:14.420781  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.421255  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:44:14.461926  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.462055  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:44:14.509912  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.510522  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:44:14.558004  412953 command_runner.go:130] > Certificate will not expire
	I1210 07:44:14.558477  412953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:44:14.599044  412953 command_runner.go:130] > Certificate will not expire
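Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours; "Certificate will not expire" is openssl's verdict, and minikube reuses the certs on that basis. The same test in Go using only the standard library (expiresWithin is an illustrative name; the path is one of the certs from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first PEM certificate in path
    // expires before now+window, i.e. what `openssl x509 -checkend` tests.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(window)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        if soon {
            fmt.Println("certificate expires within 24h")
        } else {
            fmt.Println("Certificate will not expire") // same verdict the log records
        }
    }
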
	I1210 07:44:14.599455  412953 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:44:14.599550  412953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:44:14.599615  412953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:44:14.630244  412953 cri.go:89] found id: ""
	I1210 07:44:14.630352  412953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:44:14.638132  412953 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1210 07:44:14.638152  412953 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1210 07:44:14.638158  412953 command_runner.go:130] > /var/lib/minikube/etcd:
	I1210 07:44:14.638171  412953 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:44:14.638176  412953 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:44:14.638225  412953 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:44:14.645608  412953 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:44:14.646002  412953 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-314220" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646112  412953 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-376671/kubeconfig needs updating (will repair): [kubeconfig missing "functional-314220" cluster setting kubeconfig missing "functional-314220" context setting]
	I1210 07:44:14.646387  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.646808  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.646962  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.647769  412953 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 07:44:14.647791  412953 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 07:44:14.647797  412953 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 07:44:14.647801  412953 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 07:44:14.647806  412953 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 07:44:14.647858  412953 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1210 07:44:14.648134  412953 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:44:14.656007  412953 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1210 07:44:14.656041  412953 kubeadm.go:602] duration metric: took 17.859608ms to restartPrimaryControlPlane
	I1210 07:44:14.656051  412953 kubeadm.go:403] duration metric: took 56.601079ms to StartCluster
	I1210 07:44:14.656066  412953 settings.go:142] acquiring lock: {Name:mk83336eaf1e9f7632e16e15e8d9e14eb0e0d0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.656132  412953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.656799  412953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:44:14.657004  412953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 07:44:14.657416  412953 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:44:14.657431  412953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 07:44:14.658092  412953 addons.go:70] Setting storage-provisioner=true in profile "functional-314220"
	I1210 07:44:14.658110  412953 addons.go:239] Setting addon storage-provisioner=true in "functional-314220"
	I1210 07:44:14.658137  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.658702  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.665050  412953 addons.go:70] Setting default-storageclass=true in profile "functional-314220"
	I1210 07:44:14.665125  412953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-314220"
	I1210 07:44:14.665550  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.671074  412953 out.go:179] * Verifying Kubernetes components...
	I1210 07:44:14.676445  412953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:44:14.698425  412953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 07:44:14.701187  412953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.701211  412953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 07:44:14.701278  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.705662  412953 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:44:14.705841  412953 kapi.go:59] client config for functional-314220: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 07:44:14.706176  412953 addons.go:239] Setting addon default-storageclass=true in "functional-314220"
	I1210 07:44:14.706207  412953 host.go:66] Checking if "functional-314220" exists ...
	I1210 07:44:14.706646  412953 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:44:14.744732  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:14.744810  412953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:14.744830  412953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 07:44:14.744900  412953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:44:14.778977  412953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:44:14.876345  412953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:44:14.912899  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:14.922881  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.662190  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662227  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662277  412953 retry.go:31] will retry after 311.954263ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662347  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.662381  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.662389  412953 retry.go:31] will retry after 234.07921ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
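From this point the two addon manifests are re-applied on a jittered, growing backoff (311ms, 234ms, 307ms, ... up to roughly 2.9s) until the apiserver answers on :8441 again. A minimal sketch of that retry shape, standard library only; the delays and jitter factor are illustrative, not minikube's retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, doubling
    // delay between failures -- the shape of the "will retry after ..."
    // lines in the log above.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        delay := base
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Jitter: sleep between 0.5x and 1.5x of the current delay.
            sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, 300*time.Millisecond, func() error {
            calls++
            if calls < 4 {
                return errors.New("connection refused") // apiserver not up yet
            }
            return nil // kubectl apply finally succeeds
        })
        fmt.Println("result:", err)
    }
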
	I1210 07:44:15.662447  412953 node_ready.go:35] waiting up to 6m0s for node "functional-314220" to be "Ready" ...
	I1210 07:44:15.662665  412953 type.go:168] "Request Body" body=""
	I1210 07:44:15.662765  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:15.663157  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:15.897488  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:15.957295  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:15.957408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.957431  412953 retry.go:31] will retry after 307.155853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:15.974530  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.030916  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.034621  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.034655  412953 retry.go:31] will retry after 246.948718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.162840  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.162973  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.163310  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.265735  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:16.282284  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.335651  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.339071  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.339103  412953 retry.go:31] will retry after 647.058742ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361763  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.361804  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.361822  412953 retry.go:31] will retry after 514.560746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.663231  412953 type.go:168] "Request Body" body=""
	I1210 07:44:16.663327  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:16.663641  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:16.877219  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:16.942769  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:16.942876  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.942918  412953 retry.go:31] will retry after 1.098847883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:16.987296  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.051987  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.055923  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.055964  412953 retry.go:31] will retry after 522.145884ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.163324  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.163405  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.163711  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:17.578391  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:17.635896  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:17.639746  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.639777  412953 retry.go:31] will retry after 768.766099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:17.662946  412953 type.go:168] "Request Body" body=""
	I1210 07:44:17.663049  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:17.663399  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:17.663474  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:18.042986  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:18.101043  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.104777  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.104811  412953 retry.go:31] will retry after 877.527078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.163066  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.163146  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.163494  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.409040  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:18.473157  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:18.473195  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.473221  412953 retry.go:31] will retry after 1.043117699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:18.663503  412953 type.go:168] "Request Body" body=""
	I1210 07:44:18.663629  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:18.663908  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:18.983598  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:19.054379  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.057795  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.057861  412953 retry.go:31] will retry after 2.806616267s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.163140  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.163219  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.163514  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:19.517094  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:19.577109  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:19.577146  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.577191  412953 retry.go:31] will retry after 2.260515502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:19.663401  412953 type.go:168] "Request Body" body=""
	I1210 07:44:19.663487  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:19.663841  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:44:19.663910  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:44:20.163656  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.163728  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.164096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:20.662791  412953 type.go:168] "Request Body" body=""
	I1210 07:44:20.662881  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:20.663196  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.162812  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.162886  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.163185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.662729  412953 type.go:168] "Request Body" body=""
	I1210 07:44:21.662808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:44:21.663095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:44:21.838627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:44:21.865153  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:44:21.916464  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.916504  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.916523  412953 retry.go:31] will retry after 2.650338189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
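kubectl validates manifests client-side against the apiserver's OpenAPI schema, so with nothing listening on :8441 the schema download itself fails and the apply dies before anything is submitted. The --validate=false hint in the message only skips that download; the apply would still need a reachable apiserver. A hypothetical stand-alone re-run of the logged command as a Go sketch (kubectl on PATH is assumed; paths are copied from the log, this is not minikube's ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same manifest and kubeconfig as the log; --validate=false skips the
	// OpenAPI download that is failing above, per kubectl's own hint.
	cmd := exec.Command("kubectl", "apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// Still fails while nothing accepts connections on :8441.
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}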
	I1210 07:44:21.931641  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:44:21.931686  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:44:21.931712  412953 retry.go:31] will retry after 2.932548046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
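Each failed apply is handed back to minikube's retry helper (the retry.go:31 lines), which reschedules it after a jittered, generally growing delay: 2.65s and 2.93s here, later 7.73s, 8.04s, and eventually 28.21s. A minimal sketch of that retry-with-backoff shape; the doubling-plus-jitter parameters are illustrative, not the actual retry.go implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a jittered, roughly doubling delay between tries, the same
// shape as the "will retry after Ns" lines in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 2*time.Second, func() error {
		return errors.New("connect: connection refused")
	})
	fmt.Println("gave up:", err)
}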
	[… polling continues, 07:44:22.163 through 07:44:24.163: five identical GETs, all refused; node_ready warning repeated at 07:44:22.163 …]
	[… 07:44:24.568: second storageclass.yaml apply fails with the same openapi validation error; will retry after 7.727905191s. 07:44:24.864: second storage-provisioner.yaml apply fails likewise; will retry after 3.915945796s. Node GET at 07:44:24.663 still refused …]
	[… polling continues, 07:44:25.163 through 07:44:28.663: eight identical GETs, all refused; node_ready warning repeated at 07:44:27.164 …]
	[… 07:44:28.839: third storage-provisioner.yaml apply fails with the same validation error; will retry after 8.041674514s …]
	[… polling continues, 07:44:29.163 through 07:44:32.163: seven identical GETs, all refused; node_ready warnings at 07:44:29.663 and 07:44:31.663 …]
	[… 07:44:32.354: third storageclass.yaml apply fails with the same validation error; will retry after 6.914628842s …]
	[… polling continues, 07:44:32.663 through 07:44:36.663: nine identical GETs, all refused; node_ready warnings at 07:44:34.163 and 07:44:36.663 …]
	[… 07:44:36.942: fourth storage-provisioner.yaml apply fails with the same validation error; will retry after 8.728706472s …]
	[… polling continues, 07:44:37.163 through 07:44:39.163: five identical GETs, all refused; node_ready warning at 07:44:39.163 …]
	[… 07:44:39.329: fourth storageclass.yaml apply fails with the same validation error; will retry after 20.069023813s …]
	[… polling continues, 07:44:39.664 through 07:44:45.664: thirteen identical GETs, all refused; node_ready warnings at 07:44:41.164, 07:44:43.663, and 07:44:45.664 …]
	[… 07:44:45.739: fifth storage-provisioner.yaml apply fails with the same validation error; will retry after 15.619557427s …]
	[… polling continues, 07:44:46.163 through 07:44:59.163: twenty-seven identical GETs, all refused; node_ready warnings roughly every 2.5s, 07:44:48.163 through 07:44:57.663 …]
	[… 07:44:59.461: fifth storageclass.yaml apply fails with the same validation error; will retry after 28.214559207s …]
	[… polling continues, 07:44:59.664 through 07:45:01.164: four identical GETs, all refused; node_ready warning at 07:44:59.664 …]
	I1210 07:45:01.424291  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:01.504370  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:01.504408  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:01.504426  412953 retry.go:31] will retry after 11.28420248s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
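
The retry.go:31 lines above implement apply-with-backoff: each failed apply is retried after a randomized, growing delay (11.3s, 28.2s, 28.7s and 44.8s in this run) until the retry budget is exhausted. A minimal sketch of that pattern, assuming a fixed attempt limit (applyManifest, the limit and the delay range are illustrative, not minikube's API):

	// Sketch of apply-with-backoff as seen in the retry.go:31 lines.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyManifest mirrors the ssh_runner command in the log:
	// kubectl apply --force -f <path>
	func applyManifest(path string) error {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v\n%s", err, out)
		}
		return nil
	}

	func main() {
		for attempt := 1; attempt <= 5; attempt++ {
			err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml")
			if err == nil {
				return
			}
			// Randomized wait, in the same 10-45s range as the log's retries.
			wait := time.Duration(10+rand.Intn(35)) * time.Second
			fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		fmt.Println("retry budget exhausted; reporting the addon as failed")
	}
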
	I1210 07:45:01.662884  412953 type.go:168] "Request Body" body=""
	I1210 07:45:01.662972  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:01.663364  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET polls repeated every ~0.5s from 07:45:02.162 through 07:45:12.663, all with empty responses; node_ready.go:55 "will retry" warnings (connection refused) logged at 07:45:02.163, 07:45:04.163, 07:45:06.163, 07:45:08.663 and 07:45:10.664 ...]
	I1210 07:45:12.789627  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:12.850283  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:12.850328  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:12.850347  412953 retry.go:31] will retry after 28.725170788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:13.162732  412953 type.go:168] "Request Body" body=""
	I1210 07:45:13.162821  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:13.163228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:13.163286  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polls repeated every ~0.5s from 07:45:13.662 through 07:45:27.663, all with empty responses; node_ready.go:55 "will retry" warnings (connection refused) logged at 07:45:15.163, 07:45:17.663, 07:45:20.163, 07:45:22.163, 07:45:24.163 and 07:45:26.663 ...]
	I1210 07:45:27.734263  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:45:27.790479  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:27.794248  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:27.794290  412953 retry.go:31] will retry after 44.751938518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1210 07:45:28.162814  412953 type.go:168] "Request Body" body=""
	I1210 07:45:28.162897  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:28.163247  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET polls repeated every ~0.5s from 07:45:28.662 through 07:45:41.163, all with empty responses; node_ready.go:55 "will retry" warnings (connection refused) logged at 07:45:29.163, 07:45:31.163, 07:45:33.663, 07:45:35.663, 07:45:38.163 and 07:45:40.663 ...]
	I1210 07:45:41.576476  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 07:45:41.640104  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640160  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:45:41.640252  412953 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
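
Once the retry budget runs out, the addon is reported failed via out.go (the "Enabling 'storage-provisioner' returned an error" block above), while the node_ready poll keeps running independently. That poll is the GET loop filling this log: fetch the node object every ~0.5s and retry while the connection is refused. A self-contained sketch of the same loop (URL and interval come from the log; the TLS skip and timeout are illustrative):

	// Sketch of the node_ready polling loop: GET the node every ~0.5s,
	// retrying while the apiserver refuses connections.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		const url = "https://192.168.49.2:8441/api/v1/nodes/functional-314220"
		for {
			resp, err := client.Get(url)
			if err != nil {
				// e.g. dial tcp 192.168.49.2:8441: connect: connection refused
				fmt.Println("will retry:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			resp.Body.Close()
			fmt.Println("apiserver reachable:", resp.Status)
			return
		}
	}
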
	I1210 07:45:41.663359  412953 type.go:168] "Request Body" body=""
	I1210 07:45:41.663436  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:41.663747  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET polls repeated every ~0.5s from 07:45:42.163 through 07:45:53.163, all with empty responses; node_ready.go:55 "will retry" warnings (connection refused) logged at 07:45:42.663, 07:45:45.163, 07:45:47.663, 07:45:49.663 and 07:45:51.663 ...]
	I1210 07:45:53.663263  412953 type.go:168] "Request Body" body=""
	I1210 07:45:53.663337  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:53.663619  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:54.163395  412953 type.go:168] "Request Body" body=""
	I1210 07:45:54.163472  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:54.163802  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:54.163854  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:54.663613  412953 type.go:168] "Request Body" body=""
	I1210 07:45:54.663694  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:54.664023  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:55.163717  412953 type.go:168] "Request Body" body=""
	I1210 07:45:55.163799  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:55.164118  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:55.662998  412953 type.go:168] "Request Body" body=""
	I1210 07:45:55.663091  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:55.663450  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:56.163121  412953 type.go:168] "Request Body" body=""
	I1210 07:45:56.163190  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:56.163520  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:56.663288  412953 type.go:168] "Request Body" body=""
	I1210 07:45:56.663383  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:56.663710  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:56.663763  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:57.163429  412953 type.go:168] "Request Body" body=""
	I1210 07:45:57.163515  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:57.163853  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:57.663634  412953 type.go:168] "Request Body" body=""
	I1210 07:45:57.663715  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:57.664039  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:58.162743  412953 type.go:168] "Request Body" body=""
	I1210 07:45:58.162822  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:58.163189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:58.662924  412953 type.go:168] "Request Body" body=""
	I1210 07:45:58.663036  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:58.663358  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:45:59.162715  412953 type.go:168] "Request Body" body=""
	I1210 07:45:59.162781  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:59.163074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:45:59.163122  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:45:59.662789  412953 type.go:168] "Request Body" body=""
	I1210 07:45:59.662865  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:45:59.663251  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:00.162998  412953 type.go:168] "Request Body" body=""
	I1210 07:46:00.163123  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:00.163461  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:00.663540  412953 type.go:168] "Request Body" body=""
	I1210 07:46:00.663611  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:00.663865  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:01.163683  412953 type.go:168] "Request Body" body=""
	I1210 07:46:01.163757  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:01.164109  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:01.164167  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:01.662851  412953 type.go:168] "Request Body" body=""
	I1210 07:46:01.662929  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:01.663282  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:02.162953  412953 type.go:168] "Request Body" body=""
	I1210 07:46:02.163054  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:02.163311  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:02.662784  412953 type.go:168] "Request Body" body=""
	I1210 07:46:02.662862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:02.663241  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:03.162775  412953 type.go:168] "Request Body" body=""
	I1210 07:46:03.162854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:03.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:03.662701  412953 type.go:168] "Request Body" body=""
	I1210 07:46:03.662776  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:03.663100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:03.663150  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:04.162777  412953 type.go:168] "Request Body" body=""
	I1210 07:46:04.162853  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:04.163234  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:04.663099  412953 type.go:168] "Request Body" body=""
	I1210 07:46:04.663184  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:04.663539  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:05.163328  412953 type.go:168] "Request Body" body=""
	I1210 07:46:05.163396  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:05.163668  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:05.663706  412953 type.go:168] "Request Body" body=""
	I1210 07:46:05.663785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:05.664109  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:05.664167  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:06.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:46:06.162864  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:06.163213  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:06.662683  412953 type.go:168] "Request Body" body=""
	I1210 07:46:06.662757  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:06.663110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:07.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:46:07.162915  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:07.163278  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:07.663001  412953 type.go:168] "Request Body" body=""
	I1210 07:46:07.663101  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:07.663452  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:08.163105  412953 type.go:168] "Request Body" body=""
	I1210 07:46:08.163173  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:08.163505  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:08.163551  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:08.663246  412953 type.go:168] "Request Body" body=""
	I1210 07:46:08.663355  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:08.663696  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:09.163360  412953 type.go:168] "Request Body" body=""
	I1210 07:46:09.163436  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:09.163764  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:09.663545  412953 type.go:168] "Request Body" body=""
	I1210 07:46:09.663613  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:09.663865  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:10.163582  412953 type.go:168] "Request Body" body=""
	I1210 07:46:10.163660  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:10.164166  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:10.164222  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:10.662978  412953 type.go:168] "Request Body" body=""
	I1210 07:46:10.663066  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:10.663429  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:11.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:46:11.162791  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:11.163072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:11.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:46:11.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:11.663211  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:12.162917  412953 type.go:168] "Request Body" body=""
	I1210 07:46:12.162995  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:12.163357  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:12.546948  412953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1210 07:46:12.609717  412953 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609753  412953 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1210 07:46:12.609836  412953 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1210 07:46:12.614848  412953 out.go:179] * Enabled addons: 
	I1210 07:46:12.617540  412953 addons.go:530] duration metric: took 1m57.960111858s for enable addons: enabled=[]
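
The storageclass apply above fails during kubectl's client-side validation: with the apiserver refusing connections on 8441, kubectl cannot download the OpenAPI schema it validates against, so the apply never reaches the server and minikube queues a retry (addons.go:477). Below is a minimal Go sketch of that run-and-retry pattern, assuming an illustrative attempt count and delay rather than minikube's actual retry policy; the function name applyWithRetry is hypothetical.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry mirrors the ssh_runner invocation logged above: it shells
// out to the pinned kubectl with the minikube kubeconfig and retries while
// the apply fails. Attempt count and delay are illustrative assumptions.
func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		// While 8441 refuses connections, kubectl fails client-side: it
		// cannot fetch /openapi/v2 for validation, which is the exact
		// error string captured in the log above.
		lastErr = fmt.Errorf("apply failed, will retry: %w\n%s", err, out)
		fmt.Println(lastErr)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 6*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
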
	I1210 07:46:12.662919  412953 type.go:168] "Request Body" body=""
	I1210 07:46:12.663005  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:12.663288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:12.663340  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical polling continued every ~500ms from 07:46:13 through 07:46:46 with empty responses; node_ready.go:55 repeated the same "connection refused" (will retry) warning at 07:46:14, 07:46:17, 07:46:19, 07:46:21, 07:46:23, 07:46:26, 07:46:28, 07:46:30, 07:46:32, 07:46:34, 07:46:37, 07:46:39, 07:46:41, 07:46:44, and 07:46:46 ...]
	I1210 07:46:47.162966  412953 type.go:168] "Request Body" body=""
	I1210 07:46:47.163051  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:47.163320  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:47.662754  412953 type.go:168] "Request Body" body=""
	I1210 07:46:47.662828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:47.663189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:48.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:46:48.162860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:48.163220  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:48.662721  412953 type.go:168] "Request Body" body=""
	I1210 07:46:48.662793  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:48.663100  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:48.663152  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:49.162765  412953 type.go:168] "Request Body" body=""
	I1210 07:46:49.162862  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:49.163255  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:49.662773  412953 type.go:168] "Request Body" body=""
	I1210 07:46:49.662850  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:49.663184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:50.162728  412953 type.go:168] "Request Body" body=""
	I1210 07:46:50.162800  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:50.163110  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:50.662840  412953 type.go:168] "Request Body" body=""
	I1210 07:46:50.662940  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:50.663293  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:50.663354  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:51.163080  412953 type.go:168] "Request Body" body=""
	I1210 07:46:51.163164  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:51.163485  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:51.663128  412953 type.go:168] "Request Body" body=""
	I1210 07:46:51.663202  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:51.663559  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:52.163395  412953 type.go:168] "Request Body" body=""
	I1210 07:46:52.163475  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:52.163785  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:52.663590  412953 type.go:168] "Request Body" body=""
	I1210 07:46:52.663677  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:52.664017  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:52.664072  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:53.162668  412953 type.go:168] "Request Body" body=""
	I1210 07:46:53.162748  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:53.163064  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:53.662759  412953 type.go:168] "Request Body" body=""
	I1210 07:46:53.662839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:53.663165  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:54.162852  412953 type.go:168] "Request Body" body=""
	I1210 07:46:54.162929  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:54.163260  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:54.663174  412953 type.go:168] "Request Body" body=""
	I1210 07:46:54.663244  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:54.663519  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:55.163370  412953 type.go:168] "Request Body" body=""
	I1210 07:46:55.163453  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:55.163790  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:55.163840  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:55.663591  412953 type.go:168] "Request Body" body=""
	I1210 07:46:55.663675  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:55.664032  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:56.162718  412953 type.go:168] "Request Body" body=""
	I1210 07:46:56.162787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:56.163127  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:56.662776  412953 type.go:168] "Request Body" body=""
	I1210 07:46:56.662857  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:56.663223  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:57.162920  412953 type.go:168] "Request Body" body=""
	I1210 07:46:57.162997  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:57.163350  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:57.663044  412953 type.go:168] "Request Body" body=""
	I1210 07:46:57.663119  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:57.663437  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:46:57.663491  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:46:58.162764  412953 type.go:168] "Request Body" body=""
	I1210 07:46:58.162842  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:58.163207  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:58.662916  412953 type.go:168] "Request Body" body=""
	I1210 07:46:58.662998  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:58.663346  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:59.162707  412953 type.go:168] "Request Body" body=""
	I1210 07:46:59.162786  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:59.163086  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:46:59.662772  412953 type.go:168] "Request Body" body=""
	I1210 07:46:59.662861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:46:59.663202  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:00.162842  412953 type.go:168] "Request Body" body=""
	I1210 07:47:00.162937  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:00.163453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:00.163526  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:00.663007  412953 type.go:168] "Request Body" body=""
	I1210 07:47:00.663098  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:00.663417  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:01.162776  412953 type.go:168] "Request Body" body=""
	I1210 07:47:01.162858  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:01.163244  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:01.662972  412953 type.go:168] "Request Body" body=""
	I1210 07:47:01.663073  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:01.663435  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:02.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:47:02.162838  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:02.163124  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:02.662798  412953 type.go:168] "Request Body" body=""
	I1210 07:47:02.662880  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:02.663234  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:02.663292  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:03.162956  412953 type.go:168] "Request Body" body=""
	I1210 07:47:03.163056  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:03.163409  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:03.663107  412953 type.go:168] "Request Body" body=""
	I1210 07:47:03.663180  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:03.663591  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:04.163359  412953 type.go:168] "Request Body" body=""
	I1210 07:47:04.163439  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:04.163771  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:04.662805  412953 type.go:168] "Request Body" body=""
	I1210 07:47:04.662883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:04.663237  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:05.162670  412953 type.go:168] "Request Body" body=""
	I1210 07:47:05.162740  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:05.163054  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:05.163104  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:05.662994  412953 type.go:168] "Request Body" body=""
	I1210 07:47:05.663093  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:05.663427  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:06.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:06.162859  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:06.163239  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:06.662861  412953 type.go:168] "Request Body" body=""
	I1210 07:47:06.662927  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:06.663190  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:07.162855  412953 type.go:168] "Request Body" body=""
	I1210 07:47:07.162985  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:07.163313  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:07.163372  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:07.662758  412953 type.go:168] "Request Body" body=""
	I1210 07:47:07.662832  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:07.663175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:08.162737  412953 type.go:168] "Request Body" body=""
	I1210 07:47:08.162808  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:08.163134  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:08.662744  412953 type.go:168] "Request Body" body=""
	I1210 07:47:08.662821  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:08.663156  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:09.162866  412953 type.go:168] "Request Body" body=""
	I1210 07:47:09.162942  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:09.163277  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:09.662758  412953 type.go:168] "Request Body" body=""
	I1210 07:47:09.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:09.663155  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:09.663202  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:10.162922  412953 type.go:168] "Request Body" body=""
	I1210 07:47:10.163005  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:10.163368  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:10.662972  412953 type.go:168] "Request Body" body=""
	I1210 07:47:10.663067  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:10.663391  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:11.163075  412953 type.go:168] "Request Body" body=""
	I1210 07:47:11.163169  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:11.163529  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:11.663298  412953 type.go:168] "Request Body" body=""
	I1210 07:47:11.663381  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:11.663711  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:11.663770  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:12.163545  412953 type.go:168] "Request Body" body=""
	I1210 07:47:12.163626  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:12.163962  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:12.663589  412953 type.go:168] "Request Body" body=""
	I1210 07:47:12.663663  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:12.663928  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:13.163691  412953 type.go:168] "Request Body" body=""
	I1210 07:47:13.163763  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:13.164095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:13.662728  412953 type.go:168] "Request Body" body=""
	I1210 07:47:13.662802  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:13.663152  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:14.162755  412953 type.go:168] "Request Body" body=""
	I1210 07:47:14.162827  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:14.163128  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:14.163174  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:14.663040  412953 type.go:168] "Request Body" body=""
	I1210 07:47:14.663122  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:14.663453  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:15.163172  412953 type.go:168] "Request Body" body=""
	I1210 07:47:15.163245  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:15.163583  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:15.663556  412953 type.go:168] "Request Body" body=""
	I1210 07:47:15.663626  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:15.663897  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:16.162681  412953 type.go:168] "Request Body" body=""
	I1210 07:47:16.162760  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:16.163086  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:16.662812  412953 type.go:168] "Request Body" body=""
	I1210 07:47:16.662888  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:16.663240  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:16.663299  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:17.162710  412953 type.go:168] "Request Body" body=""
	I1210 07:47:17.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:17.163065  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:17.662764  412953 type.go:168] "Request Body" body=""
	I1210 07:47:17.662843  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:17.663184  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:18.162904  412953 type.go:168] "Request Body" body=""
	I1210 07:47:18.162985  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:18.163341  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:18.662709  412953 type.go:168] "Request Body" body=""
	I1210 07:47:18.662779  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:18.663072  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:19.162752  412953 type.go:168] "Request Body" body=""
	I1210 07:47:19.162825  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:19.163160  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:19.163217  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:19.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:47:19.662831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:19.663169  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:20.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:47:20.162785  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:20.163102  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:20.662785  412953 type.go:168] "Request Body" body=""
	I1210 07:47:20.662863  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:20.663210  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:21.162946  412953 type.go:168] "Request Body" body=""
	I1210 07:47:21.163041  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:21.163356  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:21.163416  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:21.662705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:21.662777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:21.663055  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:22.162766  412953 type.go:168] "Request Body" body=""
	I1210 07:47:22.162839  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:22.163199  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:22.662780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:22.662854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:22.663212  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:23.163429  412953 type.go:168] "Request Body" body=""
	I1210 07:47:23.163503  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:23.163746  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:23.163785  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:23.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:47:23.663593  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:23.663930  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:24.163583  412953 type.go:168] "Request Body" body=""
	I1210 07:47:24.163661  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:24.164012  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:24.662895  412953 type.go:168] "Request Body" body=""
	I1210 07:47:24.662963  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:24.663238  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:25.162780  412953 type.go:168] "Request Body" body=""
	I1210 07:47:25.162856  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:25.163189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:25.662776  412953 type.go:168] "Request Body" body=""
	I1210 07:47:25.662854  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:25.663185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:25.663235  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:26.162705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:26.162780  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:26.163095  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:26.662818  412953 type.go:168] "Request Body" body=""
	I1210 07:47:26.662889  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:26.663264  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:27.162833  412953 type.go:168] "Request Body" body=""
	I1210 07:47:27.162913  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:27.163219  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:27.662705  412953 type.go:168] "Request Body" body=""
	I1210 07:47:27.662774  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:27.663040  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:28.162741  412953 type.go:168] "Request Body" body=""
	I1210 07:47:28.162822  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:28.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:28.163235  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:28.662765  412953 type.go:168] "Request Body" body=""
	I1210 07:47:28.662837  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:28.663191  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:29.163595  412953 type.go:168] "Request Body" body=""
	I1210 07:47:29.163681  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:29.163938  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:29.663716  412953 type.go:168] "Request Body" body=""
	I1210 07:47:29.663791  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:29.664120  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:30.162802  412953 type.go:168] "Request Body" body=""
	I1210 07:47:30.162881  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:30.163228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:30.163277  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:30.662975  412953 type.go:168] "Request Body" body=""
	I1210 07:47:30.663060  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:30.663309  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:31.162750  412953 type.go:168] "Request Body" body=""
	I1210 07:47:31.162823  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:31.163180  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:31.662903  412953 type.go:168] "Request Body" body=""
	I1210 07:47:31.662983  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:31.663348  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:32.163059  412953 type.go:168] "Request Body" body=""
	I1210 07:47:32.163151  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:32.163486  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:32.163538  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:32.663271  412953 type.go:168] "Request Body" body=""
	I1210 07:47:32.663354  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:32.663709  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:33.163521  412953 type.go:168] "Request Body" body=""
	I1210 07:47:33.163606  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:33.163923  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:33.663552  412953 type.go:168] "Request Body" body=""
	I1210 07:47:33.663621  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:33.663890  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:34.162672  412953 type.go:168] "Request Body" body=""
	I1210 07:47:34.162756  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:34.163144  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:34.662973  412953 type.go:168] "Request Body" body=""
	I1210 07:47:34.663067  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:34.663388  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:34.663446  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:47:35.163091  412953 type.go:168] "Request Body" body=""
	I1210 07:47:35.163163  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:35.163426  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:35.663452  412953 type.go:168] "Request Body" body=""
	I1210 07:47:35.663527  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:35.663843  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:36.163648  412953 type.go:168] "Request Body" body=""
	I1210 07:47:36.163726  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:36.164083  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:36.662824  412953 type.go:168] "Request Body" body=""
	I1210 07:47:36.662911  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:36.663202  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:47:37.162888  412953 type.go:168] "Request Body" body=""
	I1210 07:47:37.162961  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:47:37.163292  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:47:37.163351  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-314220 and the same "connect: connection refused" result repeat every ~500 ms from 07:47:37.663 through 07:48:37.663; node_ready.go emits the identical "will retry" warning roughly every two seconds throughout ...]
	I1210 07:48:38.162719  412953 type.go:168] "Request Body" body=""
	I1210 07:48:38.162806  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:38.163087  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:38.662798  412953 type.go:168] "Request Body" body=""
	I1210 07:48:38.662885  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:38.663268  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:39.162981  412953 type.go:168] "Request Body" body=""
	I1210 07:48:39.163079  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:39.163418  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:39.662892  412953 type.go:168] "Request Body" body=""
	I1210 07:48:39.662959  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:39.663228  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:40.162925  412953 type.go:168] "Request Body" body=""
	I1210 07:48:40.163032  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:40.163374  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:40.163433  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:40.663119  412953 type.go:168] "Request Body" body=""
	I1210 07:48:40.663199  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:40.663541  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:41.163268  412953 type.go:168] "Request Body" body=""
	I1210 07:48:41.163348  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:41.163613  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:41.663378  412953 type.go:168] "Request Body" body=""
	I1210 07:48:41.663451  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:41.663768  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:42.163629  412953 type.go:168] "Request Body" body=""
	I1210 07:48:42.163723  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:42.164102  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:42.164166  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:42.662641  412953 type.go:168] "Request Body" body=""
	I1210 07:48:42.662719  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:42.663050  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:43.162792  412953 type.go:168] "Request Body" body=""
	I1210 07:48:43.162873  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:43.163217  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:43.662974  412953 type.go:168] "Request Body" body=""
	I1210 07:48:43.663077  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:43.663400  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:44.162713  412953 type.go:168] "Request Body" body=""
	I1210 07:48:44.162788  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:44.163107  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:44.663097  412953 type.go:168] "Request Body" body=""
	I1210 07:48:44.663173  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:44.663538  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:44.663598  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:45.163413  412953 type.go:168] "Request Body" body=""
	I1210 07:48:45.163521  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:45.163972  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:45.662711  412953 type.go:168] "Request Body" body=""
	I1210 07:48:45.662787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:45.663074  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:46.162828  412953 type.go:168] "Request Body" body=""
	I1210 07:48:46.162907  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:46.163281  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:46.663004  412953 type.go:168] "Request Body" body=""
	I1210 07:48:46.663101  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:46.663416  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:47.162720  412953 type.go:168] "Request Body" body=""
	I1210 07:48:47.162792  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:47.163096  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:47.163144  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:47.662695  412953 type.go:168] "Request Body" body=""
	I1210 07:48:47.662772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:47.663201  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:48.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:48.162861  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:48.163200  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:48.662706  412953 type.go:168] "Request Body" body=""
	I1210 07:48:48.662780  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:48.663092  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:49.162753  412953 type.go:168] "Request Body" body=""
	I1210 07:48:49.162836  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:49.163183  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:49.163230  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:49.662751  412953 type.go:168] "Request Body" body=""
	I1210 07:48:49.662831  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:49.663185  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:50.162871  412953 type.go:168] "Request Body" body=""
	I1210 07:48:50.162945  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:50.163286  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:50.663056  412953 type.go:168] "Request Body" body=""
	I1210 07:48:50.663136  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:50.663481  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:51.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:48:51.162872  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:51.163240  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:51.163304  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:51.662756  412953 type.go:168] "Request Body" body=""
	I1210 07:48:51.662828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:51.663133  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:52.162821  412953 type.go:168] "Request Body" body=""
	I1210 07:48:52.162901  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:52.163251  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:52.662959  412953 type.go:168] "Request Body" body=""
	I1210 07:48:52.663046  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:52.663381  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:53.163082  412953 type.go:168] "Request Body" body=""
	I1210 07:48:53.163172  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:53.163460  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:53.163528  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:53.663234  412953 type.go:168] "Request Body" body=""
	I1210 07:48:53.663307  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:53.663639  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:54.163275  412953 type.go:168] "Request Body" body=""
	I1210 07:48:54.163349  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:54.163662  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:54.663667  412953 type.go:168] "Request Body" body=""
	I1210 07:48:54.663741  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:54.663998  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:55.162712  412953 type.go:168] "Request Body" body=""
	I1210 07:48:55.162787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:55.163138  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:55.663042  412953 type.go:168] "Request Body" body=""
	I1210 07:48:55.663118  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:55.663430  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:55.663490  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:56.163120  412953 type.go:168] "Request Body" body=""
	I1210 07:48:56.163192  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:56.163444  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:56.663167  412953 type.go:168] "Request Body" body=""
	I1210 07:48:56.663247  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:56.663592  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:57.163414  412953 type.go:168] "Request Body" body=""
	I1210 07:48:57.163491  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:57.163814  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:57.663518  412953 type.go:168] "Request Body" body=""
	I1210 07:48:57.663593  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:57.663859  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:48:57.663899  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:48:58.163700  412953 type.go:168] "Request Body" body=""
	I1210 07:48:58.163775  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:58.164099  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:58.662798  412953 type.go:168] "Request Body" body=""
	I1210 07:48:58.662883  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:58.663263  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:59.162714  412953 type.go:168] "Request Body" body=""
	I1210 07:48:59.162788  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:59.163088  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:48:59.662782  412953 type.go:168] "Request Body" body=""
	I1210 07:48:59.662866  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:48:59.663288  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:00.162837  412953 type.go:168] "Request Body" body=""
	I1210 07:49:00.162924  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:00.163302  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:00.163360  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:00.663222  412953 type.go:168] "Request Body" body=""
	I1210 07:49:00.663294  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:00.663553  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:01.163418  412953 type.go:168] "Request Body" body=""
	I1210 07:49:01.163505  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:01.163868  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:01.663688  412953 type.go:168] "Request Body" body=""
	I1210 07:49:01.663772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:01.664129  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:02.162731  412953 type.go:168] "Request Body" body=""
	I1210 07:49:02.162806  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:02.163127  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:02.662770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:02.662851  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:02.663217  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:02.663293  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:03.162959  412953 type.go:168] "Request Body" body=""
	I1210 07:49:03.163058  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:03.163386  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:03.663598  412953 type.go:168] "Request Body" body=""
	I1210 07:49:03.663674  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:03.663952  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:04.163712  412953 type.go:168] "Request Body" body=""
	I1210 07:49:04.163781  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:04.164105  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:04.663090  412953 type.go:168] "Request Body" body=""
	I1210 07:49:04.663166  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:04.663491  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:04.663550  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:05.163267  412953 type.go:168] "Request Body" body=""
	I1210 07:49:05.163335  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:05.163606  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:05.663590  412953 type.go:168] "Request Body" body=""
	I1210 07:49:05.663661  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:05.663988  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:06.162730  412953 type.go:168] "Request Body" body=""
	I1210 07:49:06.162809  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:06.163147  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:06.662702  412953 type.go:168] "Request Body" body=""
	I1210 07:49:06.662772  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:06.663090  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:07.162769  412953 type.go:168] "Request Body" body=""
	I1210 07:49:07.162842  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:07.163224  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:07.163284  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:07.662818  412953 type.go:168] "Request Body" body=""
	I1210 07:49:07.662894  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:07.663261  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:08.162785  412953 type.go:168] "Request Body" body=""
	I1210 07:49:08.162860  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:08.163175  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:08.662740  412953 type.go:168] "Request Body" body=""
	I1210 07:49:08.662817  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:08.663150  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:09.162770  412953 type.go:168] "Request Body" body=""
	I1210 07:49:09.162845  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:09.163192  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:09.662749  412953 type.go:168] "Request Body" body=""
	I1210 07:49:09.662818  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:09.663142  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:09.663197  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:10.162737  412953 type.go:168] "Request Body" body=""
	I1210 07:49:10.162814  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:10.163183  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:10.662970  412953 type.go:168] "Request Body" body=""
	I1210 07:49:10.663057  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:10.663346  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:11.162920  412953 type.go:168] "Request Body" body=""
	I1210 07:49:11.162997  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:11.163332  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:11.662882  412953 type.go:168] "Request Body" body=""
	I1210 07:49:11.662959  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:11.663292  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:11.663351  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:12.163033  412953 type.go:168] "Request Body" body=""
	I1210 07:49:12.163107  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:12.163439  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:12.663269  412953 type.go:168] "Request Body" body=""
	I1210 07:49:12.663343  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:12.663599  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:13.163380  412953 type.go:168] "Request Body" body=""
	I1210 07:49:13.163458  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:13.163773  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:13.663450  412953 type.go:168] "Request Body" body=""
	I1210 07:49:13.663530  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:13.663864  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:13.663928  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:14.163662  412953 type.go:168] "Request Body" body=""
	I1210 07:49:14.163735  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:14.163995  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:14.663028  412953 type.go:168] "Request Body" body=""
	I1210 07:49:14.663102  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:14.663407  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:15.162751  412953 type.go:168] "Request Body" body=""
	I1210 07:49:15.162828  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:15.163189  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:15.662925  412953 type.go:168] "Request Body" body=""
	I1210 07:49:15.662994  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:15.663281  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:16.162756  412953 type.go:168] "Request Body" body=""
	I1210 07:49:16.162841  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:16.163429  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:16.163485  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:16.663183  412953 type.go:168] "Request Body" body=""
	I1210 07:49:16.663269  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:16.663642  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:17.163441  412953 type.go:168] "Request Body" body=""
	I1210 07:49:17.163510  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:17.163771  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:17.663499  412953 type.go:168] "Request Body" body=""
	I1210 07:49:17.663586  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:17.663939  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:18.163578  412953 type.go:168] "Request Body" body=""
	I1210 07:49:18.163666  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:18.164004  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:18.164061  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:18.662710  412953 type.go:168] "Request Body" body=""
	I1210 07:49:18.662778  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:18.663116  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:19.162846  412953 type.go:168] "Request Body" body=""
	I1210 07:49:19.162918  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:19.163279  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:19.662981  412953 type.go:168] "Request Body" body=""
	I1210 07:49:19.663079  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:19.663449  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:20.163115  412953 type.go:168] "Request Body" body=""
	I1210 07:49:20.163185  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:20.163485  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:20.662991  412953 type.go:168] "Request Body" body=""
	I1210 07:49:20.663087  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:20.663401  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:20.663459  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:21.163107  412953 type.go:168] "Request Body" body=""
	I1210 07:49:21.163187  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:21.163503  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:21.663312  412953 type.go:168] "Request Body" body=""
	I1210 07:49:21.663390  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:21.663648  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:22.163403  412953 type.go:168] "Request Body" body=""
	I1210 07:49:22.163478  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:22.163806  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:22.663492  412953 type.go:168] "Request Body" body=""
	I1210 07:49:22.663572  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:22.663904  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:22.663956  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:23.163518  412953 type.go:168] "Request Body" body=""
	I1210 07:49:23.163585  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:23.163836  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:23.663583  412953 type.go:168] "Request Body" body=""
	I1210 07:49:23.663658  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:23.663963  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:24.162717  412953 type.go:168] "Request Body" body=""
	I1210 07:49:24.162795  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:24.163164  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:24.663159  412953 type.go:168] "Request Body" body=""
	I1210 07:49:24.663235  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:24.663577  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:25.163338  412953 type.go:168] "Request Body" body=""
	I1210 07:49:25.163416  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:25.163741  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:25.163799  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	I1210 07:49:25.663562  412953 type.go:168] "Request Body" body=""
	I1210 07:49:25.663632  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:25.663965  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:26.162670  412953 type.go:168] "Request Body" body=""
	I1210 07:49:26.162742  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:26.162992  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:26.662713  412953 type.go:168] "Request Body" body=""
	I1210 07:49:26.662787  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:26.663115  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:27.162844  412953 type.go:168] "Request Body" body=""
	I1210 07:49:27.162918  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:27.163264  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:49:27.662706  412953 type.go:168] "Request Body" body=""
	I1210 07:49:27.662777  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:49:27.663066  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1210 07:49:27.663116  412953 node_ready.go:55] error getting node "functional-314220" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-314220": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-314220 poll repeated every ~500ms from 07:49:28 through 07:50:14 elided: every attempt returned "dial tcp 192.168.49.2:8441: connect: connection refused", with a "will retry" warning logged roughly every fifth attempt ...]
	I1210 07:50:15.163298  412953 type.go:168] "Request Body" body=""
	I1210 07:50:15.163374  412953 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-314220" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1210 07:50:15.163686  412953 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1210 07:50:15.662909  412953 node_ready.go:38] duration metric: took 6m0.000357427s for node "functional-314220" to be "Ready" ...
	I1210 07:50:15.669570  412953 out.go:203] 
	W1210 07:50:15.672493  412953 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1210 07:50:15.672574  412953 out.go:285] * 
	W1210 07:50:15.674736  412953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 07:50:15.677520  412953 out.go:203] 
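
The six minutes of polling elided above follow the standard client-go pattern: a fixed-interval GET of the node object bounded by an overall deadline, with connection errors treated as retryable. A minimal sketch of that pattern, assuming client-go's wait helpers and the default kubeconfig location; this is illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls one node's Ready condition every 500ms for up to
// 6 minutes, mirroring the cadence visible in the log above.
func waitNodeReady(cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// "connection refused" while the apiserver restarts is
				// retryable: report it and keep polling until the deadline.
				fmt.Printf("will retry: %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "functional-314220"); err != nil {
		fmt.Println("node never became Ready:", err) // context.DeadlineExceeded here
	}
}

Once the 6m budget is exhausted, PollUntilContextTimeout returns context.DeadlineExceeded, which is what surfaced above as "WaitNodeCondition: context deadline exceeded" in the GUEST_START error.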
	
	
	==> CRI-O <==
	Dec 10 07:50:24 functional-314220 crio[5354]: time="2025-12-10T07:50:24.426285434Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a6201215-74dc-42a0-9933-6ddae6ad702a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.473515285Z" level=info msg="Checking image status: minikube-local-cache-test:functional-314220" id=87665135-6c53-435d-b3ba-9221cc68b4f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.473693732Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.473735365Z" level=info msg="Image minikube-local-cache-test:functional-314220 not found" id=87665135-6c53-435d-b3ba-9221cc68b4f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.473819354Z" level=info msg="Neither image nor artifact minikube-local-cache-test:functional-314220 found" id=87665135-6c53-435d-b3ba-9221cc68b4f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.497396997Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-314220" id=dc94c4c8-4f43-4a3e-b913-9d29f560667a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.49757272Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-314220 not found" id=dc94c4c8-4f43-4a3e-b913-9d29f560667a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.497620277Z" level=info msg="Neither image nor artifact docker.io/library/minikube-local-cache-test:functional-314220 found" id=dc94c4c8-4f43-4a3e-b913-9d29f560667a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.525853929Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-314220" id=6393c6e3-c34a-4f60-9c57-ab1b0e07d2b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.525992114Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-314220 not found" id=6393c6e3-c34a-4f60-9c57-ab1b0e07d2b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:25 functional-314220 crio[5354]: time="2025-12-10T07:50:25.526030276Z" level=info msg="Neither image nor artifact localhost/library/minikube-local-cache-test:functional-314220 found" id=6393c6e3-c34a-4f60-9c57-ab1b0e07d2b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:26 functional-314220 crio[5354]: time="2025-12-10T07:50:26.504883294Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4e2c2d6b-5704-434f-b99b-c5114ea47b56 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:26 functional-314220 crio[5354]: time="2025-12-10T07:50:26.831911404Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=e4ba2987-c5e1-4ce5-913b-3e7c1134ef57 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:26 functional-314220 crio[5354]: time="2025-12-10T07:50:26.832058262Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=e4ba2987-c5e1-4ce5-913b-3e7c1134ef57 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:26 functional-314220 crio[5354]: time="2025-12-10T07:50:26.832092461Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=e4ba2987-c5e1-4ce5-913b-3e7c1134ef57 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.396335947Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3f049bd6-942f-4e06-9100-5081f476e811 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.39646554Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=3f049bd6-942f-4e06-9100-5081f476e811 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.396500494Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=3f049bd6-942f-4e06-9100-5081f476e811 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.424937488Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=21b99a54-1637-4e98-8582-fbb664a3115a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.425063513Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=21b99a54-1637-4e98-8582-fbb664a3115a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.425096678Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=21b99a54-1637-4e98-8582-fbb664a3115a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.461845777Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=bd754ccd-6835-4d76-aba4-e016306dde1c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.461978275Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=bd754ccd-6835-4d76-aba4-e016306dde1c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.462012868Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=bd754ccd-6835-4d76-aba4-e016306dde1c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:50:27 functional-314220 crio[5354]: time="2025-12-10T07:50:27.994690319Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=1facc1de-b2a6-4eec-b200-1dc97b1f6f51 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:50:31.850478    9497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:31.851146    9497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:31.853020    9497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:31.853657    9497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:50:31.855359    9497 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
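
Note that kubectl fails with "connection refused" rather than a timeout: the node's TCP stack answered, but nothing is listening on port 8441, meaning kube-apiserver itself is down rather than unreachable. A hypothetical standalone probe (not part of the test suite; the address is taken from the log above) that separates the two cases:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("port open: apiserver is accepting connections")
	case errors.Is(err, syscall.ECONNREFUSED):
		fmt.Println("port closed: apiserver process is down") // this report's case
	default:
		fmt.Println("network-level failure:", err) // timeout, no route, etc.
	}
}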
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	[Dec10 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 07:50:31 up  2:33,  0 user,  load average: 0.83, 0.40, 0.84
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 07:50:29 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:29 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 10 07:50:29 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:29 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:29 functional-314220 kubelet[9371]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:29 functional-314220 kubelet[9371]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:29 functional-314220 kubelet[9371]: E1210 07:50:29.978546    9371 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:29 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:29 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:30 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1156.
	Dec 10 07:50:30 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:30 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:30 functional-314220 kubelet[9392]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:30 functional-314220 kubelet[9392]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:30 functional-314220 kubelet[9392]: E1210 07:50:30.723308    9392 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:30 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:30 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 07:50:31 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 10 07:50:31 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:31 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 07:50:31 functional-314220 kubelet[9413]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:31 functional-314220 kubelet[9413]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 07:50:31 functional-314220 kubelet[9413]: E1210 07:50:31.475182    9413 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 07:50:31 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 07:50:31 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (319.580042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.36s)
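The kubelet journal above explains the crash loop: kubelet v1.35.0-beta.0 fails its own configuration validation on this cgroup v1 host, so systemd restarts it endlessly (restart counter already at 1157) and the apiserver never comes up. Per the "FailCgroupV1" warning that kubeadm prints in the next section, a minimal sketch of the opt-out, assuming the kubelet config file kubeadm writes at /var/lib/kubelet/config.yaml and assuming the usual lowerCamelCase field spelling (cgroup v1 support stays deprecated either way):

    # /var/lib/kubelet/config.yaml (fragment; field name taken from the warning text, casing assumed)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false   # let kubelet v1.35+ start on a cgroup v1 host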

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-314220 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1210 07:52:27.800507  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:55:04.882519  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:56:27.947062  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:57:27.800716  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:00:04.883218  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:02:27.800518  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-314220 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.013757496s)

                                                
                                                
-- stdout --
	* [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000214333s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
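All three kubeadm attempts fail the same way: the kubelet refuses to start on this cgroup v1 kernel, so /healthz on 10248 never answers. As a sketch of the retry that minikube itself suggests, with flag values copied from the hint above and from the original invocation (whether kubelet.cgroup-driver=systemd alone clears the failure is not verified here; the kubeadm warning indicates FailCgroupV1=false would also be needed on this host):

    out/minikube-linux-arm64 start -p functional-314220 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --extra-config=kubelet.cgroup-driver=systemd \
      --wait=all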
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-314220 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.015006594s for "functional-314220" cluster.
I1210 08:02:44.869656  378528 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
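The inspect dump shows the container healthy from Docker's point of view, with the apiserver port 8441/tcp published on the host at 127.0.0.1:33161. The same mapping can be read without parsing the JSON; a standard Docker CLI query, shown against this container only as an example:

    docker port functional-314220 8441/tcp
    # per the NetworkSettings above: 127.0.0.1:33161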
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (443.573604ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-446865 image ls --format yaml --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh     │ functional-446865 ssh pgrep buildkitd                                                                                                             │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ image   │ functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr                                            │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format json --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls                                                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format table --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ delete  │ -p functional-446865                                                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ start   │ -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ start   │ -p functional-314220 --alsologtostderr -v=8                                                                                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:44 UTC │                     │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:latest                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add minikube-local-cache-test:functional-314220                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache delete minikube-local-cache-test:functional-314220                                                                        │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl images                                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	│ cache   │ functional-314220 cache reload                                                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ kubectl │ functional-314220 kubectl -- --context functional-314220 get pods                                                                                 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	│ start   │ -p functional-314220 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:50:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:50:32.899349  418823 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:50:32.899467  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899470  418823 out.go:374] Setting ErrFile to fd 2...
	I1210 07:50:32.899475  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899728  418823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:50:32.900077  418823 out.go:368] Setting JSON to false
	I1210 07:50:32.900875  418823 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9183,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:50:32.900927  418823 start.go:143] virtualization:  
	I1210 07:50:32.904391  418823 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:50:32.909970  418823 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:50:32.910062  418823 notify.go:221] Checking for updates...
	I1210 07:50:32.913755  418823 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:50:32.917032  418823 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:50:32.919882  418823 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:50:32.922630  418823 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:50:32.926514  418823 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:50:32.929831  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:32.929952  418823 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:50:32.973254  418823 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:50:32.973375  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.030281  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.020639734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.030378  418823 docker.go:319] overlay module found
	I1210 07:50:33.033510  418823 out.go:179] * Using the docker driver based on existing profile
	I1210 07:50:33.036367  418823 start.go:309] selected driver: docker
	I1210 07:50:33.036393  418823 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.036475  418823 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:50:33.036573  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.101667  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.09179395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.102098  418823 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:50:33.102120  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:33.102171  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:33.102212  418823 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.107143  418823 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:50:33.110125  418823 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:50:33.113004  418823 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:50:33.115816  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:33.115854  418823 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:50:33.115862  418823 cache.go:65] Caching tarball of preloaded images
	I1210 07:50:33.115956  418823 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:50:33.115966  418823 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
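
The preload check above is a plain existence test: if the cached tarball is already on disk, the download is skipped. A minimal Go sketch of that check (path copied from the log; not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Path copied from the log lines above.
    	tarball := "/home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4"
    	if _, err := os.Stat(tarball); err == nil {
    		fmt.Println("found local preload, skipping download")
    	} else {
    		fmt.Println("preload missing, would download:", err)
    	}
    }
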
	I1210 07:50:33.115961  418823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:50:33.116084  418823 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:50:33.135517  418823 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:50:33.135528  418823 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:50:33.135548  418823 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:50:33.135579  418823 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:50:33.135644  418823 start.go:364] duration metric: took 47.935µs to acquireMachinesLock for "functional-314220"
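
The recurring "duration metric" entries come from a measure-and-log idiom wrapped around each step. A minimal sketch, with acquireMachinesLock as an illustrative stand-in for the real file-based lock:

    package main

    import (
    	"log"
    	"time"
    )

    // acquireMachinesLock stands in for the real lock; only the timing idiom
    // is the point here.
    func acquireMachinesLock(profile string) {}

    func main() {
    	start := time.Now()
    	acquireMachinesLock("functional-314220")
    	log.Printf("duration metric: took %s to acquireMachinesLock for %q",
    		time.Since(start), "functional-314220")
    }
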
	I1210 07:50:33.135662  418823 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:50:33.135667  418823 fix.go:54] fixHost starting: 
	I1210 07:50:33.135928  418823 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:50:33.153142  418823 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:50:33.153176  418823 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:50:33.156510  418823 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:50:33.156542  418823 machine.go:94] provisionDockerMachine start ...
	I1210 07:50:33.156629  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.173363  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.173679  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.173685  418823 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:50:33.306701  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.306715  418823 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:50:33.306784  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.323402  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.323703  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.323711  418823 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:50:33.463802  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.463873  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.481663  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.481979  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.481993  418823 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:50:33.615371  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:50:33.615387  418823 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:50:33.615415  418823 ubuntu.go:190] setting up certificates
	I1210 07:50:33.615424  418823 provision.go:84] configureAuth start
	I1210 07:50:33.615481  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:33.633344  418823 provision.go:143] copyHostCerts
	I1210 07:50:33.633409  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:50:33.633416  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:50:33.633490  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:50:33.633597  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:50:33.633601  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:50:33.633627  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:50:33.633685  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:50:33.633688  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:50:33.633710  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:50:33.633815  418823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:50:33.839628  418823 provision.go:177] copyRemoteCerts
	I1210 07:50:33.839683  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:50:33.839721  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.857491  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:33.954662  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:50:33.972200  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:50:33.989946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:50:34.010600  418823 provision.go:87] duration metric: took 395.152109ms to configureAuth
	I1210 07:50:34.010620  418823 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:50:34.010837  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:34.010945  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.031319  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:34.031635  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:34.031646  418823 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:50:34.394456  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:50:34.394468  418823 machine.go:97] duration metric: took 1.237919377s to provisionDockerMachine
	I1210 07:50:34.394480  418823 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:50:34.394492  418823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:50:34.394553  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:50:34.394594  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.425725  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.527110  418823 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:50:34.530555  418823 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:50:34.530572  418823 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:50:34.530582  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:50:34.530636  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:50:34.530720  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:50:34.530798  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:50:34.530841  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:50:34.538245  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:34.555946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:50:34.573402  418823 start.go:296] duration metric: took 178.908422ms for postStartSetup
	I1210 07:50:34.573478  418823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:50:34.573515  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.591144  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.684092  418823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:50:34.688828  418823 fix.go:56] duration metric: took 1.553153828s for fixHost
	I1210 07:50:34.688843  418823 start.go:83] releasing machines lock for "functional-314220", held for 1.553192081s
	I1210 07:50:34.688922  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:34.705960  418823 ssh_runner.go:195] Run: cat /version.json
	I1210 07:50:34.705982  418823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:50:34.706002  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.706033  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.724227  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.734363  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.905519  418823 ssh_runner.go:195] Run: systemctl --version
	I1210 07:50:34.911896  418823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:50:34.947949  418823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:50:34.952265  418823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:50:34.952348  418823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:50:34.960087  418823 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:50:34.960100  418823 start.go:496] detecting cgroup driver to use...
	I1210 07:50:34.960131  418823 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:50:34.960194  418823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:50:34.975734  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:50:34.988235  418823 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:50:34.988306  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:50:35.008024  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:50:35.023507  418823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:50:35.140776  418823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:50:35.287143  418823 docker.go:234] disabling docker service ...
	I1210 07:50:35.287205  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:50:35.302191  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:50:35.316045  418823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:50:35.435977  418823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:50:35.558581  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
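
Disabling cri-docker and docker follows the same stop/disable/mask sequence per unit seen above. A sketch of that sequence, grouped into loops for brevity (unit names from the log; errors are tolerated because a unit may already be stopped or absent):

    package main

    import "os/exec"

    func main() {
    	// Force-stop both units of each service first.
    	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
    		_ = exec.Command("sudo", "systemctl", "stop", "-f", unit).Run()
    	}
    	// Then keep them from coming back: disable the socket, mask the service.
    	for _, svc := range []string{"cri-docker", "docker"} {
    		_ = exec.Command("sudo", "systemctl", "disable", svc+".socket").Run()
    		_ = exec.Command("sudo", "systemctl", "mask", svc+".service").Run()
    	}
    }
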
	I1210 07:50:35.570905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:50:35.584271  418823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:50:35.584341  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.593128  418823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:50:35.593191  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.602242  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.611204  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.619936  418823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:50:35.627869  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.636843  418823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.645059  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.653527  418823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:50:35.660914  418823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:50:35.668098  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:35.785150  418823 ssh_runner.go:195] Run: sudo systemctl restart crio
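
The sed commands above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf before the daemon restart. A line-oriented Go equivalent of those two rewrites (a sketch, not minikube's implementation):

    package main

    import (
    	"os"
    	"strings"
    )

    // setKey rewrites any existing `key = ...` line in the drop-in to the given
    // value, the same effect as the sed invocations above.
    func setKey(conf, key, value string) string {
    	lines := strings.Split(conf, "\n")
    	for i, line := range lines {
    		if strings.Contains(line, key+" = ") {
    			lines[i] = key + ` = "` + value + `"`
    		}
    	}
    	return strings.Join(lines, "\n")
    }

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := setKey(string(data), "pause_image", "registry.k8s.io/pause:3.10.1")
    	conf = setKey(conf, "cgroup_manager", "cgroupfs")
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		panic(err)
    	}
    }
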
	I1210 07:50:35.938526  418823 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:50:35.938594  418823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:50:35.943564  418823 start.go:564] Will wait 60s for crictl version
	I1210 07:50:35.943634  418823 ssh_runner.go:195] Run: which crictl
	I1210 07:50:35.950126  418823 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:50:35.976476  418823 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
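
The 60s waits above poll for the CRI socket before querying crictl. A minimal sketch of that wait, assuming the same socket path and a 500ms poll interval (the interval is an assumption, not taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // waitForSocket polls until the CRI socket exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	out, _ := exec.Command("sudo", "crictl", "version").CombinedOutput()
    	fmt.Print(string(out))
    }
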
	I1210 07:50:35.976565  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.013250  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.049514  418823 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:50:36.052392  418823 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:50:36.073467  418823 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:50:36.080871  418823 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 07:50:36.083861  418823 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:50:36.084003  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:36.084083  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.122033  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.122045  418823 crio.go:433] Images already preloaded, skipping extraction
	I1210 07:50:36.122104  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.147981  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.147994  418823 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:50:36.148000  418823 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:50:36.148093  418823 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:50:36.148179  418823 ssh_runner.go:195] Run: crio config
	I1210 07:50:36.223557  418823 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 07:50:36.223582  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:36.223591  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:36.223605  418823 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:50:36.223627  418823 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:50:36.223742  418823 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:50:36.223809  418823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:50:36.231667  418823 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:50:36.231750  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:50:36.239592  418823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:50:36.252574  418823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:50:36.265349  418823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1210 07:50:36.278251  418823 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:50:36.281864  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:36.395980  418823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:50:36.662807  418823 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:50:36.662818  418823 certs.go:195] generating shared ca certs ...
	I1210 07:50:36.662833  418823 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:50:36.662974  418823 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:50:36.663036  418823 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:50:36.663044  418823 certs.go:257] generating profile certs ...
	I1210 07:50:36.663128  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:50:36.663184  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:50:36.663221  418823 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:50:36.663326  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:50:36.663359  418823 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:50:36.663370  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:50:36.663396  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:50:36.663419  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:50:36.663444  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:50:36.663487  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:36.664085  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:50:36.684901  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:50:36.704871  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:50:36.724001  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:50:36.742252  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:50:36.759395  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:50:36.776213  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:50:36.793265  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:50:36.810512  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:50:36.828353  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:50:36.845515  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:50:36.862765  418823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:50:36.875122  418823 ssh_runner.go:195] Run: openssl version
	I1210 07:50:36.881447  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.888818  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:50:36.896054  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899817  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899876  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.940839  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:50:36.948274  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.955506  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:50:36.963139  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966818  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966873  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:37.008344  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:50:37.018542  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.028848  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:50:37.037787  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041789  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041883  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.083088  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
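
Each CA above is hashed with openssl x509 -hash and then symlinked as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0) so OpenSSL's directory lookup can find it. A sketch of one iteration (paths and commands from the log; error handling is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	// ln -fs <cert> /etc/ssl/certs/<hash>.0, as in the log above.
    	if err := exec.Command("sudo", "ln", "-fs", cert, link).Run(); err != nil {
    		panic(err)
    	}
    }
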
	I1210 07:50:37.090399  418823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:50:37.093984  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:50:37.134711  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:50:37.175584  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:50:37.216322  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:50:37.258210  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:50:37.300727  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:50:37.343870  418823 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:37.343957  418823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:50:37.344031  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.373693  418823 cri.go:89] found id: ""
	I1210 07:50:37.373755  418823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:50:37.382429  418823 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:50:37.382439  418823 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:50:37.382493  418823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:50:37.389449  418823 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.389979  418823 kubeconfig.go:125] found "functional-314220" server: "https://192.168.49.2:8441"
	I1210 07:50:37.391548  418823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:50:37.399103  418823 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 07:36:02.271715799 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 07:50:36.273283366 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
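
Drift detection here leans on diff -u's exit status: 0 means identical, 1 means the configs differ, anything higher is a diff failure. A sketch of that decision (same paths as the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	if err == nil {
    		fmt.Println("no kubeadm config drift")
    		return
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		// Exit status 1 means the files differ: reconfigure from the new yaml.
    		fmt.Printf("detected kubeadm config drift:\n%s", out)
    		return
    	}
    	panic(err) // exit status >= 2: diff itself failed
    }
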
	I1210 07:50:37.399128  418823 kubeadm.go:1161] stopping kube-system containers ...
	I1210 07:50:37.399140  418823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 07:50:37.399196  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.434614  418823 cri.go:89] found id: ""
	I1210 07:50:37.434674  418823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 07:50:37.455844  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:50:37.463706  418823 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 07:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 07:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 07:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 10 07:40 /etc/kubernetes/scheduler.conf
	
	I1210 07:50:37.463780  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:50:37.471472  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:50:37.478782  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.478837  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:50:37.486355  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.493976  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.494040  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.501640  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:50:37.509588  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.509645  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
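
Each kubeconfig is kept only if it already contains the control-plane endpoint; a grep exit status of 1 triggers removal so the kubeconfig phase below regenerates the file. A sketch with the same files and endpoint as the log (it assumes the files are readable; the log runs grep via sudo):

    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8441"
    	for _, conf := range []string{
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // endpoint already present; keep the file
    		}
    		// Missing: remove so "kubeadm init phase kubeconfig" rewrites it.
    		_ = exec.Command("sudo", "rm", "-f", conf).Run()
    	}
    }
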
	I1210 07:50:37.517276  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:50:37.525049  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:37.571686  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.573879  418823 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.002165526s)
	I1210 07:50:39.573940  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.780126  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.857417  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
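
On restart, selected kubeadm init phases are rerun in order instead of a full init. The sequence above, expressed as a loop (phase names and paths copied from the log; error handling is illustrative, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		args := []string{kubeadm, "init", "phase"}
    		args = append(args, strings.Fields(phase)...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("phase %q failed: %v\n%s", phase, err, out))
    		}
    	}
    }
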
	I1210 07:50:39.903067  418823 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:50:39.903139  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:40.403973  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:40.903804  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:41.403355  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:41.904207  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:42.404057  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:42.903818  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:43.404234  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:43.904036  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:44.403331  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:44.904250  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:45.404168  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:45.904093  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:46.404204  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:46.904144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:47.404213  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:47.903250  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:48.404144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:48.904262  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:49.404011  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:49.903321  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:50.403990  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:50.903998  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:51.403914  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:51.903990  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:52.403942  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:52.903796  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:53.403576  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:53.903966  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:54.403314  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:54.904147  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:55.404245  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:55.903953  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:56.403340  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:56.904274  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:57.404124  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:57.903801  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:58.404241  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:58.903869  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:59.403287  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:59.903954  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:00.403352  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:00.904043  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:01.403894  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:01.903648  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:02.404219  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:02.903678  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:03.403948  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:03.904224  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:04.403353  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:04.904036  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:05.404217  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:05.903272  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:06.404216  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:06.903390  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:07.403379  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:07.903804  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:08.404215  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:08.904228  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:09.404143  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:09.904284  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:10.403331  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:10.904097  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:11.404225  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:11.903848  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:12.403282  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:12.903360  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:13.403955  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:13.903329  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:14.404081  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:14.903215  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:15.403223  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:15.903728  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:16.403337  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	... (identical pgrep probe repeated every ~500 ms, 47 attempts in all, through 07:51:39) ...
	I1210 07:51:39.403315  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
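
The loop above is the apiserver readiness probe: the same pgrep runs every ~500 ms and never finds a match. A minimal standalone sketch of that probe (the pattern and the interval are taken from the log lines; the loop structure itself is an illustrative assumption):

    # Probe taken from the log: wait until a kube-apiserver launched for the
    # minikube profile appears in the process table (checked every 500 ms).
    # -f matches the full command line, -x requires the whole line to match
    # the pattern, -n picks the newest matching process.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5
    done
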
	I1210 07:51:39.903344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:39.903423  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:39.933715  418823 cri.go:89] found id: ""
	I1210 07:51:39.933730  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.933737  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:39.933741  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:39.933807  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:39.959343  418823 cri.go:89] found id: ""
	I1210 07:51:39.959358  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.959366  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:39.959371  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:39.959428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:39.985280  418823 cri.go:89] found id: ""
	I1210 07:51:39.985294  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.985302  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:39.985307  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:39.985366  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:40.021888  418823 cri.go:89] found id: ""
	I1210 07:51:40.021904  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.021912  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:40.021917  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:40.022019  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:40.050222  418823 cri.go:89] found id: ""
	I1210 07:51:40.050238  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.050245  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:40.050251  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:40.050314  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:40.076513  418823 cri.go:89] found id: ""
	I1210 07:51:40.076528  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.076536  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:40.076541  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:40.076603  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:40.106190  418823 cri.go:89] found id: ""
	I1210 07:51:40.106206  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.106213  418823 logs.go:284] No container was found matching "kindnet"
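
The block above scans each expected control-plane component with crictl and finds no containers at all. A sketch of the same enumeration, using only the crictl invocation shown in the log (the component list is read off the log lines; the shell loop is illustrative):

    # One crictl query per component, exactly as in the log; --quiet prints
    # only container IDs, so an empty result means the component is not running.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids="$(sudo crictl ps -a --quiet --name="$c")"
        [ -z "$ids" ] && echo "no container matching \"$c\""
    done
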
	I1210 07:51:40.106221  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:40.106232  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:40.171760  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:40.171781  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:40.188577  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:40.188594  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:40.259869  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:40.250864   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.251869   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.253556   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.254191   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.255824   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
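
The refused connection on localhost:8441 means nothing is listening on the apiserver port at all, which matches the empty pgrep and crictl results above. A hypothetical one-line check (curl and the /healthz path are assumptions, not from the log) that distinguishes "no listener" from "apiserver up but unhealthy":

    # Assumption: curl is available inside the node; /healthz is the standard
    # apiserver health endpoint. "connection refused" here = no listener on :8441.
    curl -sk https://localhost:8441/healthz || echo "apiserver not reachable on :8441"
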
	I1210 07:51:40.259893  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:40.259905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:40.330751  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:40.330772  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
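
After the container scan comes up empty, the tool gathers a diagnostic bundle. The commands below are copied verbatim from the Run: lines above and can be replayed inside the node to collect the same logs by hand:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
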
	I1210 07:51:42.864666  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.875209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:42.875278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:42.906775  418823 cri.go:89] found id: ""
	I1210 07:51:42.906788  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.906796  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:42.906802  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:42.906860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:42.932120  418823 cri.go:89] found id: ""
	I1210 07:51:42.932134  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.932142  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:42.932147  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:42.932207  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:42.960769  418823 cri.go:89] found id: ""
	I1210 07:51:42.960784  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.960793  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:42.960798  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:42.960857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:42.986269  418823 cri.go:89] found id: ""
	I1210 07:51:42.986285  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.986294  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:42.986299  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:42.986361  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:43.021139  418823 cri.go:89] found id: ""
	I1210 07:51:43.021155  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.021163  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:43.021168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:43.021241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:43.047486  418823 cri.go:89] found id: ""
	I1210 07:51:43.047501  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.047508  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:43.047513  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:43.047576  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:43.073233  418823 cri.go:89] found id: ""
	I1210 07:51:43.073247  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.073255  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:43.073263  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:43.073273  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:43.139078  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:43.139105  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:43.153579  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:43.153595  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:43.240938  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:43.232595   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.233028   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.234681   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.235233   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.236893   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:51:43.240958  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:43.240970  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:43.308772  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:43.308794  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:45.841619  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.852276  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:45.852345  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:45.887199  418823 cri.go:89] found id: ""
	I1210 07:51:45.887215  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.887222  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:45.887237  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:45.887324  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:45.918859  418823 cri.go:89] found id: ""
	I1210 07:51:45.918873  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.918880  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:45.918885  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:45.918944  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:45.943991  418823 cri.go:89] found id: ""
	I1210 07:51:45.944006  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.944014  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:45.944019  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:45.944088  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:45.970351  418823 cri.go:89] found id: ""
	I1210 07:51:45.970371  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.970379  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:45.970384  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:45.970444  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:45.995587  418823 cri.go:89] found id: ""
	I1210 07:51:45.995601  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.995609  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:45.995614  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:45.995678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:46.023570  418823 cri.go:89] found id: ""
	I1210 07:51:46.023586  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.023593  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:46.023599  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:46.023660  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:46.056294  418823 cri.go:89] found id: ""
	I1210 07:51:46.056309  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.056317  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:46.056325  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:46.056336  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:46.125021  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:46.125041  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:46.139709  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:46.139728  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:46.233096  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:46.221808   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.223333   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227730   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.229200   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:51:46.233116  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:46.233127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:46.302440  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:46.302460  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:48.833091  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.843740  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:48.843804  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:48.869041  418823 cri.go:89] found id: ""
	I1210 07:51:48.869057  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.869064  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:48.869070  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:48.869139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:48.893750  418823 cri.go:89] found id: ""
	I1210 07:51:48.893765  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.893784  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:48.893790  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:48.893850  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:48.919315  418823 cri.go:89] found id: ""
	I1210 07:51:48.919330  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.919337  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:48.919343  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:48.919413  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:48.944091  418823 cri.go:89] found id: ""
	I1210 07:51:48.944107  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.944114  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:48.944120  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:48.944178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:48.968980  418823 cri.go:89] found id: ""
	I1210 07:51:48.968995  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.969002  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:48.969007  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:48.969066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:48.994258  418823 cri.go:89] found id: ""
	I1210 07:51:48.994272  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.994279  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:48.994294  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:48.994354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:49.021988  418823 cri.go:89] found id: ""
	I1210 07:51:49.022004  418823 logs.go:282] 0 containers: []
	W1210 07:51:49.022012  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:49.022019  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:49.022029  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:49.089579  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:49.089605  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:49.118629  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:49.118648  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:49.191180  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:49.191204  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:49.208309  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:49.208325  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:49.273461  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:49.264556   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.265266   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.266981   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.267634   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.269070   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:51:51.775168  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.785506  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:51.785567  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:51.810828  418823 cri.go:89] found id: ""
	I1210 07:51:51.810843  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.810860  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:51.810865  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:51.810926  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:51.835270  418823 cri.go:89] found id: ""
	I1210 07:51:51.835285  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.835292  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:51.835297  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:51.835357  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:51.862106  418823 cri.go:89] found id: ""
	I1210 07:51:51.862121  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.862129  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:51.862134  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:51.862203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:51.887726  418823 cri.go:89] found id: ""
	I1210 07:51:51.887741  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.887749  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:51.887754  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:51.887816  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:51.916383  418823 cri.go:89] found id: ""
	I1210 07:51:51.916398  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.916405  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:51.916409  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:51.916479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:51.945251  418823 cri.go:89] found id: ""
	I1210 07:51:51.945266  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.945273  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:51.945278  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:51.945337  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:51.970333  418823 cri.go:89] found id: ""
	I1210 07:51:51.970348  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.970357  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:51.970365  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:51.970385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:51.998969  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:51.998986  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:52.071390  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:52.071420  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:52.087389  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:52.087406  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:52.154961  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:52.145998   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.146914   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.148561   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.149089   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.150792   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:51:52.154973  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:52.154985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:54.734714  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.745090  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:54.745151  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:54.770064  418823 cri.go:89] found id: ""
	I1210 07:51:54.770079  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.770086  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:54.770091  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:54.770149  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:54.796152  418823 cri.go:89] found id: ""
	I1210 07:51:54.796167  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.796174  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:54.796179  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:54.796241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:54.822080  418823 cri.go:89] found id: ""
	I1210 07:51:54.822095  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.822102  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:54.822107  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:54.822175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:54.849868  418823 cri.go:89] found id: ""
	I1210 07:51:54.849883  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.849891  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:54.849895  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:54.849951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:54.875726  418823 cri.go:89] found id: ""
	I1210 07:51:54.875741  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.875748  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:54.875753  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:54.875815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:54.905509  418823 cri.go:89] found id: ""
	I1210 07:51:54.905524  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.905531  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:54.905536  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:54.905595  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:54.931115  418823 cri.go:89] found id: ""
	I1210 07:51:54.931138  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.931146  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:54.931154  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:54.931164  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:54.997885  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:54.997906  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:55.030067  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:55.030094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:55.099098  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:55.099116  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:55.113912  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:55.113934  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:55.200955  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:55.191641   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.192478   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.194192   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.195162   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.196812   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:51:57.701770  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.712296  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:57.712359  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:57.742200  418823 cri.go:89] found id: ""
	I1210 07:51:57.742217  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.742225  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:57.742230  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:57.742288  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:57.770042  418823 cri.go:89] found id: ""
	I1210 07:51:57.770056  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.770063  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:57.770068  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:57.770126  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:57.795451  418823 cri.go:89] found id: ""
	I1210 07:51:57.795464  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.795471  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:57.795477  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:57.795536  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:57.823068  418823 cri.go:89] found id: ""
	I1210 07:51:57.823084  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.823091  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:57.823097  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:57.823160  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:57.849968  418823 cri.go:89] found id: ""
	I1210 07:51:57.849982  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.849998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:57.850003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:57.850064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:57.877868  418823 cri.go:89] found id: ""
	I1210 07:51:57.877881  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.877889  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:57.877894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:57.877954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:57.903803  418823 cri.go:89] found id: ""
	I1210 07:51:57.903823  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.903830  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:57.903838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:57.903849  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:57.970812  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:57.970831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:57.985765  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:57.985786  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:58.070052  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:58.061598   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.062333   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.063943   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.064517   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.066053   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:51:58.070062  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:58.070076  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:58.138971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:58.138993  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:00.678904  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.689904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:00.689965  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:00.717867  418823 cri.go:89] found id: ""
	I1210 07:52:00.717882  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.717889  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:00.717895  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:00.717960  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:00.746728  418823 cri.go:89] found id: ""
	I1210 07:52:00.746743  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.746750  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:00.746755  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:00.746815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:00.771995  418823 cri.go:89] found id: ""
	I1210 07:52:00.772009  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.772016  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:00.772021  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:00.772084  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:00.801311  418823 cri.go:89] found id: ""
	I1210 07:52:00.801326  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.801333  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:00.801338  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:00.801400  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:00.827977  418823 cri.go:89] found id: ""
	I1210 07:52:00.827992  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.827999  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:00.828004  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:00.828064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:00.857640  418823 cri.go:89] found id: ""
	I1210 07:52:00.857653  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.857661  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:00.857666  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:00.857723  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:00.886162  418823 cri.go:89] found id: ""
	I1210 07:52:00.886176  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.886183  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:00.886192  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:00.886203  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:00.900682  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:00.900699  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:00.962996  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:00.954850   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.955457   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957002   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957542   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.959068   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:00.963006  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:00.963044  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:01.030923  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:01.030945  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:01.064661  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:01.064678  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:03.634114  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.644373  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:03.644437  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:03.670228  418823 cri.go:89] found id: ""
	I1210 07:52:03.670242  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.670250  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:03.670255  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:03.670313  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:03.697715  418823 cri.go:89] found id: ""
	I1210 07:52:03.697730  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.697737  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:03.697742  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:03.697800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:03.725317  418823 cri.go:89] found id: ""
	I1210 07:52:03.725331  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.725338  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:03.725344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:03.725406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:03.754932  418823 cri.go:89] found id: ""
	I1210 07:52:03.754947  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.754954  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:03.754959  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:03.755055  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:03.781710  418823 cri.go:89] found id: ""
	I1210 07:52:03.781724  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.781731  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:03.781736  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:03.781799  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:03.806748  418823 cri.go:89] found id: ""
	I1210 07:52:03.806761  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.806769  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:03.806773  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:03.806839  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:03.831941  418823 cri.go:89] found id: ""
	I1210 07:52:03.831956  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.831963  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:03.831970  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:03.831980  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:03.893889  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:03.885770   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.886207   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.887763   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.888077   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.889552   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:03.893899  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:03.893910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:03.963740  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:03.963762  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:03.994617  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:03.994633  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:04.064848  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:04.064869  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
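Each polling cycle above and below follows the same shape: a `pgrep` for a host-level apiserver process, one `crictl ps -a` query per control-plane component, then a sweep of kubelet, dmesg, CRI-O, and container-status logs. Since the commands appear verbatim in the entries, one pass can be reproduced by hand inside the node, for example:

	# One diagnostic pass, reproduced from the commands in this log; run inside the minikube node.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'            # host-level apiserver process?
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$c"                 # any container, running or exited, for this component?
	done
	sudo journalctl -u kubelet -n 400                       # kubelet log tail
	sudo journalctl -u crio -n 400                          # CRI-O log tail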
	I1210 07:52:06.580763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.590814  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:06.590876  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:06.617862  418823 cri.go:89] found id: ""
	I1210 07:52:06.617877  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.617884  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:06.617889  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:06.617952  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:06.642344  418823 cri.go:89] found id: ""
	I1210 07:52:06.642364  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.642372  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:06.642376  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:06.642434  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:06.668168  418823 cri.go:89] found id: ""
	I1210 07:52:06.668181  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.668189  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:06.668194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:06.668252  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:06.693569  418823 cri.go:89] found id: ""
	I1210 07:52:06.693584  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.693591  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:06.693596  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:06.693655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:06.719248  418823 cri.go:89] found id: ""
	I1210 07:52:06.719272  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.719281  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:06.719286  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:06.719353  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:06.744269  418823 cri.go:89] found id: ""
	I1210 07:52:06.744298  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.744306  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:06.744311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:06.744384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:06.769456  418823 cri.go:89] found id: ""
	I1210 07:52:06.769485  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.769493  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:06.769501  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:06.769520  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:06.835122  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:06.826477   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.827191   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.828919   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.829490   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.831268   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:06.835134  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:06.835145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:06.903874  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:06.903896  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:06.932245  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:06.932261  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:06.999686  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:06.999707  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:09.516631  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.527151  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:09.527214  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:09.553162  418823 cri.go:89] found id: ""
	I1210 07:52:09.553175  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.553182  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:09.553187  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:09.553248  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:09.577770  418823 cri.go:89] found id: ""
	I1210 07:52:09.577785  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.577792  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:09.577797  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:09.577857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:09.603741  418823 cri.go:89] found id: ""
	I1210 07:52:09.603755  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.603765  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:09.603770  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:09.603830  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:09.631507  418823 cri.go:89] found id: ""
	I1210 07:52:09.631521  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.631529  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:09.631534  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:09.631597  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:09.657315  418823 cri.go:89] found id: ""
	I1210 07:52:09.657329  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.657342  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:09.657347  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:09.657406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:09.682591  418823 cri.go:89] found id: ""
	I1210 07:52:09.682606  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.682613  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:09.682619  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:09.682677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:09.708020  418823 cri.go:89] found id: ""
	I1210 07:52:09.708034  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.708042  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:09.708049  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:09.708062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:09.777964  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:09.777985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:09.792349  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:09.792367  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:09.854411  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:09.846045   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.846788   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848379   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848685   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.850192   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:09.854421  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:09.854434  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:09.922233  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:09.922255  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
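Every `crictl ps -a` query so far has returned `found id: ""` and `0 containers`: the control-plane containers were never created at all, not merely crashed, which places the fault below kubectl, in the kubelet's static-pod handling or the CRI. A hedged sketch of follow-up checks inside the node (the paths and grep patterns are standard kubeadm conventions, assumed here rather than taken from this log):

	# Illustrative follow-up checks; paths and patterns are assumptions, not from this log.
	sudo crictl pods --name kube-apiserver                  # was a pod sandbox ever created?
	ls /etc/kubernetes/manifests                            # are the static-pod manifests in place?
	sudo journalctl -u kubelet -n 400 | grep -iE 'kube-apiserver|static pod|failed'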
	I1210 07:52:12.457145  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.468643  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:12.468721  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:12.494760  418823 cri.go:89] found id: ""
	I1210 07:52:12.494774  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.494782  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:12.494787  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:12.494853  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:12.520639  418823 cri.go:89] found id: ""
	I1210 07:52:12.520653  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.520673  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:12.520678  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:12.520738  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:12.546812  418823 cri.go:89] found id: ""
	I1210 07:52:12.546827  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.546834  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:12.546839  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:12.546899  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:12.573531  418823 cri.go:89] found id: ""
	I1210 07:52:12.573546  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.573553  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:12.573558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:12.573623  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:12.600389  418823 cri.go:89] found id: ""
	I1210 07:52:12.600403  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.600411  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:12.600416  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:12.600475  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:12.630232  418823 cri.go:89] found id: ""
	I1210 07:52:12.630257  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.630265  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:12.630271  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:12.630340  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:12.656013  418823 cri.go:89] found id: ""
	I1210 07:52:12.656027  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.656035  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:12.656042  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:12.656058  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:12.727638  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:12.727667  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:12.742877  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:12.742895  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:12.807790  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:12.799348   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.800103   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.801856   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.802374   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.803909   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:12.807802  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:12.807814  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:12.876103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:12.876124  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:15.409499  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.424003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:15.424080  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:15.458307  418823 cri.go:89] found id: ""
	I1210 07:52:15.458341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.458348  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:15.458353  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:15.458428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:15.488619  418823 cri.go:89] found id: ""
	I1210 07:52:15.488634  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.488641  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:15.488646  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:15.488709  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:15.513795  418823 cri.go:89] found id: ""
	I1210 07:52:15.513809  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.513817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:15.513831  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:15.513888  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:15.539219  418823 cri.go:89] found id: ""
	I1210 07:52:15.539233  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.539240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:15.539245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:15.539305  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:15.565461  418823 cri.go:89] found id: ""
	I1210 07:52:15.565475  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.565490  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:15.565495  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:15.565554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:15.597327  418823 cri.go:89] found id: ""
	I1210 07:52:15.597341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.597348  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:15.597354  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:15.597412  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:15.622974  418823 cri.go:89] found id: ""
	I1210 07:52:15.622994  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.623001  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:15.623047  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:15.623059  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:15.690204  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:15.681087   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.681887   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.683667   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.684195   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.685805   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:15.690215  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:15.690226  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:15.758230  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:15.758252  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:15.788867  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:15.788884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:15.856134  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:15.856154  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:18.371925  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.382408  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:18.382482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:18.408893  418823 cri.go:89] found id: ""
	I1210 07:52:18.408907  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.408914  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:18.408919  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:18.408994  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:18.444341  418823 cri.go:89] found id: ""
	I1210 07:52:18.444355  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.444374  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:18.444380  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:18.444450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:18.476809  418823 cri.go:89] found id: ""
	I1210 07:52:18.476823  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.476830  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:18.476835  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:18.476892  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:18.503052  418823 cri.go:89] found id: ""
	I1210 07:52:18.503066  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.503073  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:18.503078  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:18.503150  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:18.529967  418823 cri.go:89] found id: ""
	I1210 07:52:18.529981  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.529998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:18.530003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:18.530095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:18.555604  418823 cri.go:89] found id: ""
	I1210 07:52:18.555619  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.555626  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:18.555631  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:18.555692  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:18.580758  418823 cri.go:89] found id: ""
	I1210 07:52:18.580773  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.580781  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:18.580789  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:18.580803  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:18.649536  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:18.641471   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.642112   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.643861   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.644377   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.645509   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:18.649546  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:18.649558  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:18.720152  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:18.720174  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:18.749804  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:18.749823  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:18.819943  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:18.819965  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:21.337138  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.347127  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:21.347189  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:21.373895  418823 cri.go:89] found id: ""
	I1210 07:52:21.373918  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.373926  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:21.373931  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:21.373998  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:21.399869  418823 cri.go:89] found id: ""
	I1210 07:52:21.399896  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.399903  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:21.399908  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:21.399979  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:21.427202  418823 cri.go:89] found id: ""
	I1210 07:52:21.427219  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.427226  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:21.427231  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:21.427299  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:21.458325  418823 cri.go:89] found id: ""
	I1210 07:52:21.458348  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.458355  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:21.458360  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:21.458429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:21.488232  418823 cri.go:89] found id: ""
	I1210 07:52:21.488246  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.488253  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:21.488259  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:21.488318  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:21.523678  418823 cri.go:89] found id: ""
	I1210 07:52:21.523693  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.523700  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:21.523706  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:21.523774  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:21.554053  418823 cri.go:89] found id: ""
	I1210 07:52:21.554068  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.554076  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:21.554084  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:21.554094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:21.584626  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:21.584643  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:21.650495  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:21.650516  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:21.665376  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:21.665393  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:21.728186  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:21.719729   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.720322   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722158   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722717   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.724385   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:21.728197  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:21.728210  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:24.296826  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:24.306876  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:24.306941  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:24.331566  418823 cri.go:89] found id: ""
	I1210 07:52:24.331580  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.331587  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:24.331592  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:24.331654  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:24.364290  418823 cri.go:89] found id: ""
	I1210 07:52:24.364304  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.364312  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:24.364317  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:24.364375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:24.394840  418823 cri.go:89] found id: ""
	I1210 07:52:24.394855  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.394863  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:24.394871  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:24.394927  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:24.423155  418823 cri.go:89] found id: ""
	I1210 07:52:24.423169  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.423176  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:24.423181  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:24.423237  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:24.448495  418823 cri.go:89] found id: ""
	I1210 07:52:24.448509  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.448517  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:24.448522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:24.448582  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:24.473213  418823 cri.go:89] found id: ""
	I1210 07:52:24.473228  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.473244  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:24.473250  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:24.473311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:24.498332  418823 cri.go:89] found id: ""
	I1210 07:52:24.498346  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.498363  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:24.498371  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:24.498386  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:24.512582  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:24.512599  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:24.576630  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:24.568982   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.569512   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.570996   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.571433   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.572843   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:24.576640  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:24.576651  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:24.643309  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:24.643329  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:24.671954  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:24.671973  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
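The cycles repeat on a roughly three-second cadence (07:52:00, :03, :06, ...) with identical results each time. When reproducing this outside the test harness, the same wait can be written as a bounded poll, with the full log bundle captured once at the end; a sketch assuming `minikube ssh` command passthrough and the documented `minikube logs --file` flag, with `<profile>` again a placeholder:

	# Bounded wait for the apiserver, then capture the log bundle once.
	# <profile> is a placeholder; both commands are hedged assumptions about this build's CLI.
	for i in $(seq 1 40); do
	  minikube -p <profile> ssh -- curl -sk --max-time 2 https://localhost:8441/healthz >/dev/null 2>&1 && break
	  sleep 3
	done
	minikube -p <profile> logs --file=./minikube-logs.txt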
	I1210 07:52:27.241302  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:27.251489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:27.251554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:27.276224  418823 cri.go:89] found id: ""
	I1210 07:52:27.276239  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.276247  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:27.276252  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:27.276315  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:27.302841  418823 cri.go:89] found id: ""
	I1210 07:52:27.302855  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.302862  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:27.302867  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:27.302934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:27.329134  418823 cri.go:89] found id: ""
	I1210 07:52:27.329148  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.329155  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:27.329160  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:27.329217  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:27.355218  418823 cri.go:89] found id: ""
	I1210 07:52:27.355233  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.355240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:27.355245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:27.355310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:27.380928  418823 cri.go:89] found id: ""
	I1210 07:52:27.380942  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.380948  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:27.380953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:27.381016  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:27.405139  418823 cri.go:89] found id: ""
	I1210 07:52:27.405153  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.405160  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:27.405165  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:27.405224  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:27.434261  418823 cri.go:89] found id: ""
	I1210 07:52:27.434274  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.434281  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:27.434288  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:27.434308  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:27.512344  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:27.512364  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:27.526600  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:27.526616  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:27.593338  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:27.585550   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.586220   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.587805   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.588181   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.589629   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:27.585550   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.586220   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.587805   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.588181   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.589629   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
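Every retry in this section dies the same way: kubectl cannot reach https://localhost:8441, and the "[::1]:8441: connect: connection refused" detail means nothing is listening on the apiserver port at all. A minimal sketch of how to confirm that by hand (assuming shell access to the node, e.g. via `minikube ssh`, and that `ss` from iproute2 is available; neither is shown in this excerpt):

	$ sudo ss -ltnp | grep ':8441' || echo 'no listener on 8441'   # empty grep -> the apiserver port is simply closed
	$ curl -k https://localhost:8441/healthz                       # fails with "connection refused" while it stays down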
	I1210 07:52:27.593348  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:27.593360  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:27.660306  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:27.660330  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
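The pass above (probe for a kube-apiserver process, list CRI containers by name, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs) is the cycle that repeats below. It can be reproduced manually with the same commands the log shows minikube running; a sketch using the literal paths from this run:

	$ sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	$ for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      sudo crictl ps -a --quiet --name="$c"        # all empty in this run
	  done
	$ sudo journalctl -u kubelet -n 400
	$ sudo journalctl -u crio -n 400
	$ sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	$ sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig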
	I1210 07:52:30.190245  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:30.200692  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:30.200762  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:30.225476  418823 cri.go:89] found id: ""
	I1210 07:52:30.225491  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.225498  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:30.225503  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:30.225561  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:30.252256  418823 cri.go:89] found id: ""
	I1210 07:52:30.252270  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.252277  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:30.252282  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:30.252339  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:30.277929  418823 cri.go:89] found id: ""
	I1210 07:52:30.277943  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.277950  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:30.277955  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:30.278013  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:30.303604  418823 cri.go:89] found id: ""
	I1210 07:52:30.303619  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.303627  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:30.303631  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:30.303695  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:30.328592  418823 cri.go:89] found id: ""
	I1210 07:52:30.328606  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.328620  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:30.328625  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:30.328683  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:30.357680  418823 cri.go:89] found id: ""
	I1210 07:52:30.357694  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.357701  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:30.357706  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:30.357772  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:30.383058  418823 cri.go:89] found id: ""
	I1210 07:52:30.383071  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.383085  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:30.383093  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:30.383103  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:30.451001  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:30.451264  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:30.466690  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:30.466709  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:30.535653  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:30.527209   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.528061   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.529830   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.530197   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.531734   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:30.527209   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.528061   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.529830   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.530197   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.531734   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:30.535662  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:30.535673  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:30.603957  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:30.603978  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:33.138030  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:33.148615  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:33.148680  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:33.174834  418823 cri.go:89] found id: ""
	I1210 07:52:33.174848  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.174855  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:33.174860  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:33.174922  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:33.205206  418823 cri.go:89] found id: ""
	I1210 07:52:33.205221  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.205228  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:33.205233  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:33.205296  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:33.235457  418823 cri.go:89] found id: ""
	I1210 07:52:33.235472  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.235480  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:33.235485  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:33.235548  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:33.260204  418823 cri.go:89] found id: ""
	I1210 07:52:33.260218  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.260225  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:33.260230  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:33.260290  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:33.285426  418823 cri.go:89] found id: ""
	I1210 07:52:33.285440  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.285448  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:33.285453  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:33.285513  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:33.310040  418823 cri.go:89] found id: ""
	I1210 07:52:33.310054  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.310068  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:33.310073  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:33.310135  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:33.334636  418823 cri.go:89] found id: ""
	I1210 07:52:33.334650  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.334658  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:33.334665  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:33.334676  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:33.400914  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:33.392977   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.393712   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395357   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395749   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.397284   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:33.392977   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.393712   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395357   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395749   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.397284   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:33.400923  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:33.400934  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:33.489102  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:33.489132  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:33.523301  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:33.523319  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:33.590429  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:33.590450  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:36.107174  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:36.117293  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:36.117353  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:36.141455  418823 cri.go:89] found id: ""
	I1210 07:52:36.141469  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.141477  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:36.141482  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:36.141541  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:36.172812  418823 cri.go:89] found id: ""
	I1210 07:52:36.172826  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.172833  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:36.172838  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:36.172901  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:36.201760  418823 cri.go:89] found id: ""
	I1210 07:52:36.201774  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.201781  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:36.201786  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:36.201845  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:36.227525  418823 cri.go:89] found id: ""
	I1210 07:52:36.227539  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.227553  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:36.227558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:36.227617  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:36.255643  418823 cri.go:89] found id: ""
	I1210 07:52:36.255657  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.255664  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:36.255669  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:36.255729  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:36.281030  418823 cri.go:89] found id: ""
	I1210 07:52:36.281044  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.281052  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:36.281057  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:36.281115  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:36.307190  418823 cri.go:89] found id: ""
	I1210 07:52:36.307204  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.307211  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:36.307219  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:36.307231  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:36.321687  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:36.321705  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:36.383640  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:36.374708   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.375130   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.376841   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.377402   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.379213   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:36.374708   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.375130   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.376841   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.377402   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.379213   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:36.383650  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:36.383672  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:36.452123  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:36.452142  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:36.485724  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:36.485743  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:39.051733  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:39.062052  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:39.062152  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:39.086707  418823 cri.go:89] found id: ""
	I1210 07:52:39.086722  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.086729  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:39.086734  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:39.086793  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:39.111720  418823 cri.go:89] found id: ""
	I1210 07:52:39.111734  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.111742  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:39.111747  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:39.111807  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:39.135349  418823 cri.go:89] found id: ""
	I1210 07:52:39.135364  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.135371  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:39.135376  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:39.135435  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:39.160834  418823 cri.go:89] found id: ""
	I1210 07:52:39.160857  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.160865  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:39.160871  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:39.160938  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:39.189613  418823 cri.go:89] found id: ""
	I1210 07:52:39.189626  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.189634  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:39.189639  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:39.189696  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:39.214373  418823 cri.go:89] found id: ""
	I1210 07:52:39.214387  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.214394  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:39.214400  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:39.214457  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:39.239814  418823 cri.go:89] found id: ""
	I1210 07:52:39.239829  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.239837  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:39.239845  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:39.239856  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:39.304237  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:39.304257  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:39.320565  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:39.320583  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:39.389276  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:39.381128   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.381864   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383397   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383829   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.385337   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:39.381128   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.381864   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383397   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383829   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.385337   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:39.389286  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:39.389297  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:39.466908  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:39.466930  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:42.005528  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:42.023294  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:42.023367  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:42.058874  418823 cri.go:89] found id: ""
	I1210 07:52:42.058903  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.058911  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:42.058932  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:42.059040  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:42.089784  418823 cri.go:89] found id: ""
	I1210 07:52:42.089801  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.089809  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:42.089814  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:42.089881  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:42.121634  418823 cri.go:89] found id: ""
	I1210 07:52:42.121650  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.121658  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:42.121663  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:42.121737  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:42.153538  418823 cri.go:89] found id: ""
	I1210 07:52:42.153555  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.153563  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:42.153569  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:42.153644  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:42.183586  418823 cri.go:89] found id: ""
	I1210 07:52:42.183603  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.183611  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:42.183619  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:42.183688  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:42.213049  418823 cri.go:89] found id: ""
	I1210 07:52:42.213067  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.213078  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:42.213084  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:42.213165  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:42.242211  418823 cri.go:89] found id: ""
	I1210 07:52:42.242229  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.242241  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:42.242250  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:42.242268  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:42.258546  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:42.258571  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:42.332221  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:42.323428   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.324132   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.325892   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.326478   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.328088   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:42.323428   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.324132   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.325892   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.326478   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.328088   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:42.332230  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:42.332241  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:42.398832  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:42.398851  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:42.439292  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:42.439308  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:45.012889  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:45.052510  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:45.052580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:45.096465  418823 cri.go:89] found id: ""
	I1210 07:52:45.096488  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.096496  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:45.096501  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:45.096574  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:45.131426  418823 cri.go:89] found id: ""
	I1210 07:52:45.131442  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.131450  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:45.131456  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:45.131530  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:45.179314  418823 cri.go:89] found id: ""
	I1210 07:52:45.179331  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.179340  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:45.179345  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:45.179416  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:45.224508  418823 cri.go:89] found id: ""
	I1210 07:52:45.224525  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.224534  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:45.224540  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:45.224616  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:45.259822  418823 cri.go:89] found id: ""
	I1210 07:52:45.259850  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.259859  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:45.259870  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:45.259980  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:45.289141  418823 cri.go:89] found id: ""
	I1210 07:52:45.289157  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.289164  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:45.289170  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:45.289256  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:45.317720  418823 cri.go:89] found id: ""
	I1210 07:52:45.317749  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.317764  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:45.317796  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:45.317831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:45.385230  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:45.375821   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.376581   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378081   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378688   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.380382   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:45.375821   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.376581   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378081   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378688   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.380382   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:45.385240  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:45.385251  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:45.456646  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:45.456667  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:45.489700  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:45.489717  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:45.554187  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:45.554206  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:48.069065  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:48.079822  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:48.079950  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:48.110229  418823 cri.go:89] found id: ""
	I1210 07:52:48.110244  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.110251  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:48.110256  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:48.110317  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:48.138842  418823 cri.go:89] found id: ""
	I1210 07:52:48.138856  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.138864  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:48.138869  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:48.138928  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:48.164708  418823 cri.go:89] found id: ""
	I1210 07:52:48.164722  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.164730  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:48.164735  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:48.164793  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:48.190030  418823 cri.go:89] found id: ""
	I1210 07:52:48.190056  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.190063  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:48.190069  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:48.190160  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:48.214783  418823 cri.go:89] found id: ""
	I1210 07:52:48.214798  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.214824  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:48.214830  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:48.214899  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:48.242669  418823 cri.go:89] found id: ""
	I1210 07:52:48.242684  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.242692  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:48.242697  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:48.242758  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:48.269761  418823 cri.go:89] found id: ""
	I1210 07:52:48.269776  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.269784  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:48.269791  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:48.269802  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:48.334847  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:48.334871  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:48.349781  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:48.349796  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:48.422853  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:48.408060   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.408898   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411067   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411847   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.413714   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:48.408060   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.408898   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411067   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411847   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.413714   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:48.422867  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:48.422877  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:48.504694  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:48.504717  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:51.036528  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.046592  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.046665  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.073731  418823 cri.go:89] found id: ""
	I1210 07:52:51.073746  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.073753  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.073759  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.073819  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.100005  418823 cri.go:89] found id: ""
	I1210 07:52:51.100019  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.100027  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.100031  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.100095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.125872  418823 cri.go:89] found id: ""
	I1210 07:52:51.125897  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.125905  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.125910  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.125970  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:51.151761  418823 cri.go:89] found id: ""
	I1210 07:52:51.151775  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.151783  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:51.151788  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:51.151846  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:51.178046  418823 cri.go:89] found id: ""
	I1210 07:52:51.178060  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.178068  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:51.178074  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:51.178143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:51.205729  418823 cri.go:89] found id: ""
	I1210 07:52:51.205743  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.205750  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:51.205756  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:51.205813  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:51.231485  418823 cri.go:89] found id: ""
	I1210 07:52:51.231498  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.231505  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:51.231512  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:51.231522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:51.295749  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:51.295769  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:51.310814  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:51.310832  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:51.374238  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:51.365806   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.366220   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.367678   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.368628   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.370186   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
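Every describe-nodes attempt in this log dies the same way: kubectl inside the node dials the profile's apiserver endpoint, localhost:8441, and is refused, which is consistent with the empty crictl sweeps above (there is no kube-apiserver container to listen on that port). A quick check of the symptom from inside the node; note that ss is an assumption here (it is not used anywhere in this log), so substitute whatever socket tool the node image provides:

    # expect no listener while the apiserver is down
    sudo ss -ltn | grep 8441 || echo "nothing listening on :8441"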
	I1210 07:52:51.374248  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:51.374260  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:51.442190  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:51.442209  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
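Each block from the pgrep line down to this container-status gather is one iteration of minikube's wait loop: it polls for a kube-apiserver process belonging to this profile and, while the poll finds nothing, re-collects kubelet, dmesg, describe-nodes, CRI-O, and container-status logs (the container-status command also shows the fallback order: try crictl, else docker ps -a). The same cycle repeats below with only the timestamps and helper PIDs changing, until the surrounding test gives up. A rough standalone equivalent of the poll, with the pgrep pattern copied from the log and the interval inferred from the roughly 3 s gap between cycles:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # matches the cadence visible in the timestamps above
    done
    echo "kube-apiserver process is up"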
	I1210 07:52:53.979674  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:53.989805  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:53.989873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.022480  418823 cri.go:89] found id: ""
	I1210 07:52:54.022494  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.022501  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.022507  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.022571  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.049837  418823 cri.go:89] found id: ""
	I1210 07:52:54.049851  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.049858  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.049864  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.049924  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.079149  418823 cri.go:89] found id: ""
	I1210 07:52:54.079164  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.079172  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.079177  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.079244  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.110317  418823 cri.go:89] found id: ""
	I1210 07:52:54.110332  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.110339  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.110344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.110401  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.137776  418823 cri.go:89] found id: ""
	I1210 07:52:54.137798  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.137806  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.137812  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.137873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.162601  418823 cri.go:89] found id: ""
	I1210 07:52:54.162615  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.162622  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.162629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.162690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:54.188677  418823 cri.go:89] found id: ""
	I1210 07:52:54.188691  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.188698  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:54.188706  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:54.188720  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:54.255918  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:54.255940  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:54.270493  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:54.270513  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:54.347104  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:54.330795   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.331491   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333174   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333794   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.335404   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:54.347114  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:54.347127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:54.415651  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:54.415676  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:56.950504  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:56.960908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:56.960974  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:56.986942  418823 cri.go:89] found id: ""
	I1210 07:52:56.986957  418823 logs.go:282] 0 containers: []
	W1210 07:52:56.986964  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:56.986969  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:56.987046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.014060  418823 cri.go:89] found id: ""
	I1210 07:52:57.014088  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.014095  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.014100  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.014192  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.040046  418823 cri.go:89] found id: ""
	I1210 07:52:57.040061  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.040069  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.040075  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.040139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.065400  418823 cri.go:89] found id: ""
	I1210 07:52:57.065427  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.065435  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.065441  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.065511  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.094105  418823 cri.go:89] found id: ""
	I1210 07:52:57.094127  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.094135  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.094140  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.094203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.120409  418823 cri.go:89] found id: ""
	I1210 07:52:57.120425  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.120432  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.120438  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.120498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.146119  418823 cri.go:89] found id: ""
	I1210 07:52:57.146134  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.146142  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.146150  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:57.146160  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:57.160510  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:57.160526  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:57.225221  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:57.216448   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.217062   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.218858   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.219548   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.221235   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:57.225232  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:57.225253  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:57.293765  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:57.293785  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.326044  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:57.326061  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:59.896294  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:59.906460  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:59.906522  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:59.930908  418823 cri.go:89] found id: ""
	I1210 07:52:59.930922  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.930930  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:59.930935  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:59.930999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:59.956028  418823 cri.go:89] found id: ""
	I1210 07:52:59.956042  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.956049  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:59.956054  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:59.956120  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:59.981032  418823 cri.go:89] found id: ""
	I1210 07:52:59.981046  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.981053  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:59.981058  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:59.981116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.027952  418823 cri.go:89] found id: ""
	I1210 07:53:00.027967  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.027975  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.027981  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.028053  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.149242  418823 cri.go:89] found id: ""
	I1210 07:53:00.149275  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.149301  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.149308  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.149381  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.205658  418823 cri.go:89] found id: ""
	I1210 07:53:00.205676  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.205684  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.205691  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.205842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.272868  418823 cri.go:89] found id: ""
	I1210 07:53:00.272884  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.272892  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.272901  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.272914  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:00.364734  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:00.349008   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.349679   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.351763   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.352235   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.354014   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:00.364745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:00.364757  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:00.441561  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:00.441581  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:00.486703  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.486722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.551636  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.551658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.068015  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.078410  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.078481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.103362  418823 cri.go:89] found id: ""
	I1210 07:53:03.103378  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.103385  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.103391  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.103451  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.129650  418823 cri.go:89] found id: ""
	I1210 07:53:03.129668  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.129676  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.129681  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.129753  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.156057  418823 cri.go:89] found id: ""
	I1210 07:53:03.156072  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.156079  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.156085  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.156143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.181869  418823 cri.go:89] found id: ""
	I1210 07:53:03.181895  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.181903  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.181908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.181976  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.210043  418823 cri.go:89] found id: ""
	I1210 07:53:03.210056  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.210064  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.210069  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.210148  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.234991  418823 cri.go:89] found id: ""
	I1210 07:53:03.235006  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.235046  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.235051  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.235119  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.261578  418823 cri.go:89] found id: ""
	I1210 07:53:03.261605  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.261612  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.261620  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.261630  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:03.326335  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.326355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.340836  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.340853  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.407609  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.398750   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.399755   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401402   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401872   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.403636   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:03.407623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:03.407637  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:03.494941  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.494964  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.031492  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.042260  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.042330  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.069383  418823 cri.go:89] found id: ""
	I1210 07:53:06.069398  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.069405  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.069410  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.069471  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.095692  418823 cri.go:89] found id: ""
	I1210 07:53:06.095706  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.095713  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.095718  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.095783  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.122565  418823 cri.go:89] found id: ""
	I1210 07:53:06.122579  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.122585  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.122590  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.122647  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.147461  418823 cri.go:89] found id: ""
	I1210 07:53:06.147476  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.147483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.147489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.147549  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.172221  418823 cri.go:89] found id: ""
	I1210 07:53:06.172235  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.172243  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.172248  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.172306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.200403  418823 cri.go:89] found id: ""
	I1210 07:53:06.200417  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.200424  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.200429  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.200487  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.224557  418823 cri.go:89] found id: ""
	I1210 07:53:06.224572  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.224578  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.224586  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.224597  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.285061  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.277547   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.278040   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279487   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279882   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.281310   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:06.285071  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:06.285082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:06.351298  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.351317  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.379592  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.379609  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.448278  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.448298  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:08.966418  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:08.976886  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:08.976953  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.010205  418823 cri.go:89] found id: ""
	I1210 07:53:09.010221  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.010248  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.010253  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.010336  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:09.039128  418823 cri.go:89] found id: ""
	I1210 07:53:09.039143  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.039150  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.039155  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.039225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.066093  418823 cri.go:89] found id: ""
	I1210 07:53:09.066108  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.066116  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.066121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.066218  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.091920  418823 cri.go:89] found id: ""
	I1210 07:53:09.091934  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.091948  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.091953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.092014  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.118286  418823 cri.go:89] found id: ""
	I1210 07:53:09.118301  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.118309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.118314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.118374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.143614  418823 cri.go:89] found id: ""
	I1210 07:53:09.143628  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.143635  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.143641  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.143705  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.168425  418823 cri.go:89] found id: ""
	I1210 07:53:09.168440  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.168447  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.168455  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:09.168465  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:09.236920  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.236943  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.269085  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.269103  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.339867  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.339886  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.354523  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.354541  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.432066  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.420805   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.421538   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.423585   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.424589   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.425304   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:11.933763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:11.943879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:11.943943  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:11.969555  418823 cri.go:89] found id: ""
	I1210 07:53:11.969578  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.969586  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:11.969591  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:11.969663  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:11.997107  418823 cri.go:89] found id: ""
	I1210 07:53:11.997121  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.997128  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:11.997133  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:11.997198  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.025616  418823 cri.go:89] found id: ""
	I1210 07:53:12.025630  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.025638  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.025644  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.025712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.052893  418823 cri.go:89] found id: ""
	I1210 07:53:12.052906  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.052914  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.052919  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.052983  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.077956  418823 cri.go:89] found id: ""
	I1210 07:53:12.077979  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.077988  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.077993  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.078064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.104169  418823 cri.go:89] found id: ""
	I1210 07:53:12.104183  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.104200  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.104207  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.104278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.130790  418823 cri.go:89] found id: ""
	I1210 07:53:12.130804  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.130812  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.130819  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.130831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.194759  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.194778  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.209969  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.209985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.272708  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.265266   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.265649   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267247   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267585   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.269013   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:12.272718  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:12.272730  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:12.339739  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.339759  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:14.870834  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:14.882996  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:14.883096  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:14.912032  418823 cri.go:89] found id: ""
	I1210 07:53:14.912046  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.912053  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:14.912059  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:14.912116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:14.937034  418823 cri.go:89] found id: ""
	I1210 07:53:14.937048  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.937056  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:14.937061  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:14.937122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:14.962165  418823 cri.go:89] found id: ""
	I1210 07:53:14.962180  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.962187  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:14.962192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:14.962256  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:14.987169  418823 cri.go:89] found id: ""
	I1210 07:53:14.987182  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.987190  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:14.987194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:14.987250  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.026690  418823 cri.go:89] found id: ""
	I1210 07:53:15.026706  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.026714  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.026719  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.026788  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.057882  418823 cri.go:89] found id: ""
	I1210 07:53:15.057896  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.057903  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.057908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.057977  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.084042  418823 cri.go:89] found id: ""
	I1210 07:53:15.084057  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.084064  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.084072  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.084082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:15.114864  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.114880  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.179901  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.179922  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.194821  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.194838  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.259725  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.250726   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.251670   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.253338   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.254117   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.255652   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:15.259735  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:15.259747  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:17.826809  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:17.837193  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:17.837254  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:17.863390  418823 cri.go:89] found id: ""
	I1210 07:53:17.863404  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.863411  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:17.863416  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:17.863481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:17.893221  418823 cri.go:89] found id: ""
	I1210 07:53:17.893236  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.893243  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:17.893248  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:17.893306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:17.921130  418823 cri.go:89] found id: ""
	I1210 07:53:17.921155  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.921163  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:17.921168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:17.921236  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:17.945888  418823 cri.go:89] found id: ""
	I1210 07:53:17.945901  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.945909  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:17.945914  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:17.945972  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:17.970988  418823 cri.go:89] found id: ""
	I1210 07:53:17.971002  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.971022  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:17.971027  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:17.971097  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:17.996399  418823 cri.go:89] found id: ""
	I1210 07:53:17.996413  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.996420  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:17.996425  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:17.996494  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.023886  418823 cri.go:89] found id: ""
	I1210 07:53:18.023900  418823 logs.go:282] 0 containers: []
	W1210 07:53:18.023908  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.023931  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.023947  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.090117  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.090136  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.105261  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.105280  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.174300  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.166057   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.166727   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168471   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168892   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.170326   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:18.166057   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.166727   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168471   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168892   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.170326   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:18.174310  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:18.174322  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:18.241759  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.241779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
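	(Annotation: each `found id: ""` / `0 containers: []` pair above is the result of parsing `crictl ps -a --quiet --name=<component>`, which prints one container ID per line, or nothing at all when no container matches. A rough sketch of that parsing step, assuming crictl is on PATH; the function name is illustrative, not minikube's:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs `crictl ps -a --quiet --name=<name>` and returns
	// the non-empty lines of its output, one container ID per line.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listContainerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}

	An empty slice here is what produces the "No container was found matching ..." warnings in every cycle of this log.)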
	I1210 07:53:20.779144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:20.788940  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:20.788999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:20.814543  418823 cri.go:89] found id: ""
	I1210 07:53:20.814557  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.814564  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:20.814569  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:20.814634  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:20.839723  418823 cri.go:89] found id: ""
	I1210 07:53:20.839737  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.839744  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:20.839749  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:20.839808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:20.869222  418823 cri.go:89] found id: ""
	I1210 07:53:20.869237  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.869244  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:20.869249  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:20.869310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:20.893562  418823 cri.go:89] found id: ""
	I1210 07:53:20.893576  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.893593  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:20.893598  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:20.893664  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:20.919439  418823 cri.go:89] found id: ""
	I1210 07:53:20.919454  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.919461  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:20.919466  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:20.919526  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:20.947602  418823 cri.go:89] found id: ""
	I1210 07:53:20.947617  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.947624  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:20.947629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:20.947688  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:20.976621  418823 cri.go:89] found id: ""
	I1210 07:53:20.976635  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.976642  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:20.976650  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:20.976666  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.040860  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.040884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.055749  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.055767  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.122414  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.114763   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.115197   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116691   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116998   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.118463   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:21.114763   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.115197   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116691   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116998   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.118463   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:21.122458  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:21.122468  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:21.188312  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.188333  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:23.717609  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:23.730817  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:23.730882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:23.756488  418823 cri.go:89] found id: ""
	I1210 07:53:23.756504  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.756512  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:23.756518  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:23.756584  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:23.782540  418823 cri.go:89] found id: ""
	I1210 07:53:23.782555  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.782562  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:23.782567  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:23.782626  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:23.807181  418823 cri.go:89] found id: ""
	I1210 07:53:23.807195  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.807204  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:23.807209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:23.807273  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:23.831876  418823 cri.go:89] found id: ""
	I1210 07:53:23.831891  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.831900  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:23.831905  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:23.831964  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:23.858557  418823 cri.go:89] found id: ""
	I1210 07:53:23.858572  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.858580  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:23.858585  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:23.858646  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:23.883797  418823 cri.go:89] found id: ""
	I1210 07:53:23.883811  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.883820  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:23.883825  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:23.883922  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:23.913668  418823 cri.go:89] found id: ""
	I1210 07:53:23.913682  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.913690  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:23.913698  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:23.913709  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:23.977126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:23.968779   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.969424   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971050   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971601   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.973232   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:23.968779   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.969424   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971050   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971601   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.973232   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:23.977136  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:23.977147  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:24.045089  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.045110  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.076143  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.076161  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.142779  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.142798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
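	(Annotation: note that the "Gathering logs for ..." steps run in a different order in this cycle than in the previous ones: describe nodes, CRI-O, and container status came before kubelet and dmesg this time. That shuffling is consistent with the log sources living in a Go map, since Go deliberately randomizes map iteration order; this is an inference about the implementation, not something the log states. A tiny demo of the language behavior:

	package main

	import "fmt"

	// Go randomizes map iteration order on every range, which would explain
	// the "Gathering logs for ..." steps reordering between retry cycles.
	func main() {
		sources := map[string]string{
			"kubelet":          "journalctl -u kubelet -n 400",
			"dmesg":            "dmesg ... | tail -n 400",
			"describe nodes":   "kubectl describe nodes",
			"CRI-O":            "journalctl -u crio -n 400",
			"container status": "crictl ps -a",
		}
		for name := range sources {
			fmt.Println("Gathering logs for", name, "...")
		}
	}

	Running this repeatedly prints the five names in varying orders, matching what the cycles above show.)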
	I1210 07:53:26.658408  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:26.669312  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:26.669374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:26.697592  418823 cri.go:89] found id: ""
	I1210 07:53:26.697607  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.697615  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:26.697621  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:26.697687  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:26.725323  418823 cri.go:89] found id: ""
	I1210 07:53:26.725363  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.725370  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:26.725375  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:26.725433  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:26.754039  418823 cri.go:89] found id: ""
	I1210 07:53:26.754053  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.754060  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:26.754066  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:26.754122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:26.788322  418823 cri.go:89] found id: ""
	I1210 07:53:26.788337  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.788344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:26.788349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:26.788408  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:26.818143  418823 cri.go:89] found id: ""
	I1210 07:53:26.818157  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.818180  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:26.818185  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:26.818246  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:26.845686  418823 cri.go:89] found id: ""
	I1210 07:53:26.845699  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.845707  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:26.845714  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:26.845772  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:26.871522  418823 cri.go:89] found id: ""
	I1210 07:53:26.871536  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.871544  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:26.871552  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:26.871568  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:26.902527  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:26.902544  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:26.967583  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:26.967603  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:26.982258  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:26.982275  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.053700  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.045382   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.046023   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.047684   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.048159   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.049667   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:27.045382   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.046023   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.047684   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.048159   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.049667   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:27.053710  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:27.053722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:29.623259  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:29.633196  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:29.633265  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:29.658246  418823 cri.go:89] found id: ""
	I1210 07:53:29.658271  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.658278  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:29.658283  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:29.658358  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:29.685747  418823 cri.go:89] found id: ""
	I1210 07:53:29.685762  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.685769  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:29.685775  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:29.685842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:29.721266  418823 cri.go:89] found id: ""
	I1210 07:53:29.721280  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.721288  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:29.721292  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:29.721350  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:29.746632  418823 cri.go:89] found id: ""
	I1210 07:53:29.746647  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.746655  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:29.746660  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:29.746718  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:29.771709  418823 cri.go:89] found id: ""
	I1210 07:53:29.771725  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.771732  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:29.771737  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:29.771800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:29.801580  418823 cri.go:89] found id: ""
	I1210 07:53:29.801595  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.801602  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:29.801608  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:29.801673  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:29.827750  418823 cri.go:89] found id: ""
	I1210 07:53:29.827764  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.827771  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:29.827780  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:29.827795  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:29.893437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:29.885346   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.885722   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887338   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887997   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.889654   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:29.885346   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.885722   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887338   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887997   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.889654   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:29.893447  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:29.893458  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:29.960399  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:29.960419  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:29.991781  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:29.991799  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.072819  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.072841  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
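	(Annotation: every `describe nodes` attempt in this log dies the same way: `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port at all; this cluster appears to use 8441 rather than the usual 8443. A quick way to reproduce just that reachability check in isolation, sketched in Go:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the apiserver port the kubeconfig points at. With no listener,
		// this fails fast with "connect: connection refused", as in the log.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}

	This matches the crictl results above: with zero kube-apiserver containers running, the port can only refuse connections, so kubectl cannot succeed no matter how often it retries.)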
	I1210 07:53:32.588396  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:32.598821  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:32.598882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:32.628590  418823 cri.go:89] found id: ""
	I1210 07:53:32.628604  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.628611  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:32.628616  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:32.628678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:32.658338  418823 cri.go:89] found id: ""
	I1210 07:53:32.658352  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.658359  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:32.658364  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:32.658424  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:32.701705  418823 cri.go:89] found id: ""
	I1210 07:53:32.701719  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.701727  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:32.701732  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:32.701792  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:32.735461  418823 cri.go:89] found id: ""
	I1210 07:53:32.735476  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.735483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:32.735488  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:32.735548  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:32.761096  418823 cri.go:89] found id: ""
	I1210 07:53:32.761109  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.761116  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:32.761121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:32.761180  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:32.787468  418823 cri.go:89] found id: ""
	I1210 07:53:32.787481  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.787488  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:32.787493  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:32.787553  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:32.813085  418823 cri.go:89] found id: ""
	I1210 07:53:32.813098  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.813105  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:32.813113  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:32.813123  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:32.881504  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:32.873323   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.874057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.875714   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.876057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.877280   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:32.873323   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.874057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.875714   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.876057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.877280   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:32.881541  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:32.881552  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:32.951245  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:32.951265  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:32.980096  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:32.980113  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:33.046381  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.046400  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:35.561454  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.571515  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.571579  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.596461  418823 cri.go:89] found id: ""
	I1210 07:53:35.596476  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.596483  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.596488  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.596547  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:35.623764  418823 cri.go:89] found id: ""
	I1210 07:53:35.623780  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.623787  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:35.623792  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:35.623852  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:35.649136  418823 cri.go:89] found id: ""
	I1210 07:53:35.649150  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.649159  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:35.649164  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:35.649267  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:35.689785  418823 cri.go:89] found id: ""
	I1210 07:53:35.689799  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.689806  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:35.689820  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:35.689883  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:35.717073  418823 cri.go:89] found id: ""
	I1210 07:53:35.717086  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.717104  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:35.717109  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:35.717167  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:35.747852  418823 cri.go:89] found id: ""
	I1210 07:53:35.747866  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.747874  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:35.747879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:35.747936  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:35.772479  418823 cri.go:89] found id: ""
	I1210 07:53:35.772493  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.772500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:35.772508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:35.772519  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:35.843052  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:35.843075  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:35.857842  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:35.857859  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:35.927434  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:35.918703   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.919740   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921421   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921929   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.923509   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:35.918703   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.919740   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921421   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921929   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.923509   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:35.927445  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:35.927457  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:35.996278  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:35.996299  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
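	(Annotation: the container-status command above is deliberately defensive: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` resolves crictl's path when possible, falls back to the bare name, and finally tries docker. The same resolve-then-fall-back shape in Go, using exec.LookPath; a sketch, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runContainerPS mimics the fallback in the log line above: prefer crictl
	// (resolved via PATH when possible), and fall back to docker if it fails.
	func runContainerPS() ([]byte, error) {
		crictl := "crictl"
		if path, err := exec.LookPath("crictl"); err == nil {
			crictl = path // like `which crictl || echo crictl`
		}
		if out, err := exec.Command(crictl, "ps", "-a").Output(); err == nil {
			return out, nil
		}
		// like the trailing `|| sudo docker ps -a`
		return exec.Command("docker", "ps", "-a").Output()
	}

	func main() {
		out, err := runContainerPS()
		if err != nil {
			fmt.Println("neither crictl nor docker worked:", err)
			return
		}
		fmt.Print(string(out))
	}

	The fallback matters in a report like this one, which mixes docker- and crio-backed clusters, so the same gather step works on either runtime.)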
	I1210 07:53:38.532848  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.543645  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.543706  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.573367  418823 cri.go:89] found id: ""
	I1210 07:53:38.573382  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.573389  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.573394  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.573456  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.603108  418823 cri.go:89] found id: ""
	I1210 07:53:38.603122  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.603129  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.603134  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.603193  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.629381  418823 cri.go:89] found id: ""
	I1210 07:53:38.629395  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.629402  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.629407  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.629467  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.662313  418823 cri.go:89] found id: ""
	I1210 07:53:38.662327  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.662334  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.662339  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.662402  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:38.704257  418823 cri.go:89] found id: ""
	I1210 07:53:38.704271  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.704279  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:38.704284  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:38.704346  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:38.734287  418823 cri.go:89] found id: ""
	I1210 07:53:38.734302  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.734309  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:38.734315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:38.734375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:38.760452  418823 cri.go:89] found id: ""
	I1210 07:53:38.760467  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.760474  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:38.760483  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:38.760493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:38.827227  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:38.827248  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:38.841994  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:38.842011  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:38.909535  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:38.899398   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.900949   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.901647   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.903592   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.904308   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:38.899398   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.900949   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.901647   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.903592   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.904308   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:38.909548  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:38.909559  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:38.977890  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:38.977912  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:41.514495  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.524880  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.524939  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.550178  418823 cri.go:89] found id: ""
	I1210 07:53:41.550208  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.550216  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.550220  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.550289  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.578068  418823 cri.go:89] found id: ""
	I1210 07:53:41.578090  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.578097  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.578102  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.578175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.603754  418823 cri.go:89] found id: ""
	I1210 07:53:41.603768  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.603776  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.603782  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.603840  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.628986  418823 cri.go:89] found id: ""
	I1210 07:53:41.629000  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.629008  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.629013  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.629072  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.654287  418823 cri.go:89] found id: ""
	I1210 07:53:41.654302  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.654309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.654314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.654384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.688416  418823 cri.go:89] found id: ""
	I1210 07:53:41.688430  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.688437  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.688442  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.688498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.713499  418823 cri.go:89] found id: ""
	I1210 07:53:41.713513  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.713521  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.713528  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:41.713538  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:41.730410  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:41.730426  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:41.799336  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:41.790498   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.791386   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793149   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793773   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.794989   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:41.790498   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.791386   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793149   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793773   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.794989   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:41.799346  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:41.799357  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:41.867347  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:41.867369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:41.895652  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:41.895669  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
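
Each round walks the same component list with `crictl ps -a --quiet --name=<component>`, which prints one container ID per line; empty output is what logs.go reports as `0 containers`. A self-contained sketch of that probe (assuming crictl and sudo are available on the node):

```go
// Per-component container probe, matching the crictl invocations in the
// trace: empty --quiet output means no container exists for that name.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out)) // one container ID per line
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```

In this run every probe returns empty, which is why each iteration logs `found id: ""` followed by `0 containers`.
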
	I1210 07:53:44.462932  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.472795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.472854  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.504932  418823 cri.go:89] found id: ""
	I1210 07:53:44.504947  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.504960  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.504965  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.505025  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.535103  418823 cri.go:89] found id: ""
	I1210 07:53:44.535125  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.535133  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.535138  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.535204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.560225  418823 cri.go:89] found id: ""
	I1210 07:53:44.560239  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.560247  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.560252  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.560310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.585575  418823 cri.go:89] found id: ""
	I1210 07:53:44.585597  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.585604  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.585609  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.585668  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.611737  418823 cri.go:89] found id: ""
	I1210 07:53:44.611751  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.611758  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.611763  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.611824  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.636495  418823 cri.go:89] found id: ""
	I1210 07:53:44.636510  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.636517  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.636522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.636580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.665441  418823 cri.go:89] found id: ""
	I1210 07:53:44.665455  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.665463  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.665471  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:44.665481  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:44.702032  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:44.702048  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:44.776362  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:44.776383  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:44.792240  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.792256  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:44.854270  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:44.846404   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.846945   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848504   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848953   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.850457   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:44.846404   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.846945   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848504   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848953   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.850457   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:44.854279  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:44.854291  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
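
The four log sources gathered in each round are plain shell commands executed over SSH by ssh_runner. The sketch below runs the same commands locally (the `gather` helper is hypothetical; the command strings are copied verbatim from the trace):

```go
// Sketch of the per-round log gathering. Running locally instead of via
// ssh_runner is an assumption made for illustration.
package main

import (
	"fmt"
	"os/exec"
)

// gather is a hypothetical helper: run a shell command, report output size.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("=== %s: %d bytes (err=%v) ===\n", name, len(out), err)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
```
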
	I1210 07:53:47.423978  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.436858  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.436919  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.461997  418823 cri.go:89] found id: ""
	I1210 07:53:47.462011  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.462018  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.462023  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.462125  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.487419  418823 cri.go:89] found id: ""
	I1210 07:53:47.487434  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.487441  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.487446  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.487504  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.512823  418823 cri.go:89] found id: ""
	I1210 07:53:47.512837  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.512845  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.512850  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.512913  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.538819  418823 cri.go:89] found id: ""
	I1210 07:53:47.538833  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.538840  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.538845  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.538903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.563454  418823 cri.go:89] found id: ""
	I1210 07:53:47.563468  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.563476  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.563481  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.563544  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.588347  418823 cri.go:89] found id: ""
	I1210 07:53:47.588361  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.588368  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.588374  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.588435  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.613835  418823 cri.go:89] found id: ""
	I1210 07:53:47.613848  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.613855  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.613863  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:47.613874  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:47.679468  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:47.679488  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:47.695124  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:47.695148  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:47.764330  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:47.755921   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.756582   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758176   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758693   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.760262   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:47.755921   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.756582   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758176   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758693   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.760262   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:47.764340  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:47.764350  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:47.834926  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:47.834946  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:50.366762  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.376894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.376958  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.402825  418823 cri.go:89] found id: ""
	I1210 07:53:50.402839  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.402846  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.402851  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.402912  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.431663  418823 cri.go:89] found id: ""
	I1210 07:53:50.431677  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.431685  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.431690  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.431748  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.458799  418823 cri.go:89] found id: ""
	I1210 07:53:50.458813  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.458821  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.458826  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.458885  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.483609  418823 cri.go:89] found id: ""
	I1210 07:53:50.483623  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.483630  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.483635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.483693  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.509720  418823 cri.go:89] found id: ""
	I1210 07:53:50.509735  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.509743  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.509748  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.509808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.535475  418823 cri.go:89] found id: ""
	I1210 07:53:50.535489  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.535496  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.535501  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.535560  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.559559  418823 cri.go:89] found id: ""
	I1210 07:53:50.559572  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.559580  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.559587  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:50.559598  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:50.624409  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:50.624430  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:50.639099  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:50.639117  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:50.734659  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:50.717698   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.718879   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.720716   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.721025   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.727758   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:50.717698   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.718879   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.720716   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.721025   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.727758   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:50.734673  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:50.734686  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:50.801764  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.801789  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.334554  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.344704  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.344767  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.369027  418823 cri.go:89] found id: ""
	I1210 07:53:53.369041  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.369049  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.369054  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.369112  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.392884  418823 cri.go:89] found id: ""
	I1210 07:53:53.392897  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.392904  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.392909  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.392967  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.421604  418823 cri.go:89] found id: ""
	I1210 07:53:53.421618  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.421625  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.421630  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.421690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.446954  418823 cri.go:89] found id: ""
	I1210 07:53:53.446968  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.446976  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.446982  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.447078  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.472681  418823 cri.go:89] found id: ""
	I1210 07:53:53.472696  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.472703  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.472708  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.472769  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.497847  418823 cri.go:89] found id: ""
	I1210 07:53:53.497861  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.497868  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.497873  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.497934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.524109  418823 cri.go:89] found id: ""
	I1210 07:53:53.524123  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.524131  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.524138  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.524149  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:53.593506  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:53.593527  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:53.607933  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:53.607950  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:53.678735  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:53.669132   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671186   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671527   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673119   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673744   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:53.669132   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671186   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671527   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673119   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673744   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:53.678745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:53.678755  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:53.752843  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.752865  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.287368  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.297545  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.297605  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.327438  418823 cri.go:89] found id: ""
	I1210 07:53:56.327452  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.327459  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.327465  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.327525  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.357601  418823 cri.go:89] found id: ""
	I1210 07:53:56.357616  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.357623  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.357627  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.357686  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.382796  418823 cri.go:89] found id: ""
	I1210 07:53:56.382810  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.382817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.382822  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.382878  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.410018  418823 cri.go:89] found id: ""
	I1210 07:53:56.410032  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.410039  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.410050  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.410110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.437449  418823 cri.go:89] found id: ""
	I1210 07:53:56.437472  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.437480  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.437485  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.437551  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.462063  418823 cri.go:89] found id: ""
	I1210 07:53:56.462077  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.462096  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.462102  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.462178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.489728  418823 cri.go:89] found id: ""
	I1210 07:53:56.489743  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.489750  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.489757  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.489771  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.504129  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.504145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.569498  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.561896   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.562381   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.563902   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.564206   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.565675   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:56.561896   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.562381   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.563902   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.564206   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.565675   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:56.569507  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:56.569518  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:56.638285  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.638304  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.676473  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.676490  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.250249  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.260346  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.260407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.288615  418823 cri.go:89] found id: ""
	I1210 07:53:59.288633  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.288640  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.288645  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.288707  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.314559  418823 cri.go:89] found id: ""
	I1210 07:53:59.314574  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.314581  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.314586  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.314652  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.339212  418823 cri.go:89] found id: ""
	I1210 07:53:59.339227  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.339235  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.339240  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.339296  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.365478  418823 cri.go:89] found id: ""
	I1210 07:53:59.365493  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.365500  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.365505  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.365565  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.391116  418823 cri.go:89] found id: ""
	I1210 07:53:59.391131  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.391138  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.391143  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.391204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.417133  418823 cri.go:89] found id: ""
	I1210 07:53:59.417153  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.417161  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.417166  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.417225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.442940  418823 cri.go:89] found id: ""
	I1210 07:53:59.442954  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.442961  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.442968  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:59.442979  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:59.509257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.509277  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.541319  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.541335  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.607451  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.607470  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.621934  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.621951  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.693437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.684845   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.685597   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687307   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687819   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.689392   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:59.684845   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.685597   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687307   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687819   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.689392   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
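
Every `kubectl describe nodes` attempt above fails the same way: the kubeconfig targets https://localhost:8441, and with no kube-apiserver container running, nothing listens on that port, so the TCP dial is refused before TLS or authentication even begin. A small sketch reproducing the symptom (assumption: run on the affected node):

```go
// Minimal reproduction of the recurring failure mode: dial the apiserver
// port directly. On the affected node this prints the same
// "connect: connection refused" seen in every kubectl attempt above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}
```

This is why the repeated `memcache.go:265` errors are a symptom rather than a cause: kubectl cannot fetch the API group list from a server that never came up.
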
	I1210 07:54:02.193693  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.204795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.204860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.230168  418823 cri.go:89] found id: ""
	I1210 07:54:02.230185  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.230192  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.230198  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.230311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.263333  418823 cri.go:89] found id: ""
	I1210 07:54:02.263349  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.263356  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.263361  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.263426  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.290361  418823 cri.go:89] found id: ""
	I1210 07:54:02.290376  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.290384  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.290388  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.290448  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.316861  418823 cri.go:89] found id: ""
	I1210 07:54:02.316875  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.316882  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.316894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.316951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.343227  418823 cri.go:89] found id: ""
	I1210 07:54:02.343242  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.343250  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.343255  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.343319  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.370541  418823 cri.go:89] found id: ""
	I1210 07:54:02.370555  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.370562  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.370567  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.370655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.397479  418823 cri.go:89] found id: ""
	I1210 07:54:02.397493  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.397500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.397508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.397522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.463725  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.463746  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.478295  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.478312  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.550548  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.542348   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.542913   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544446   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544888   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.546428   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:02.542348   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.542913   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544446   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544888   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.546428   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.550558  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:02.550569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:02.620103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.620125  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.149959  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.160417  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.160482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.189797  418823 cri.go:89] found id: ""
	I1210 07:54:05.189812  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.189826  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.189831  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.189890  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.217788  418823 cri.go:89] found id: ""
	I1210 07:54:05.217815  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.217823  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.217828  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.217893  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.243664  418823 cri.go:89] found id: ""
	I1210 07:54:05.243678  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.243686  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.243690  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.243749  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.269052  418823 cri.go:89] found id: ""
	I1210 07:54:05.269067  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.269075  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.269080  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.269140  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.294538  418823 cri.go:89] found id: ""
	I1210 07:54:05.294552  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.294559  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.294564  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.294627  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.321865  418823 cri.go:89] found id: ""
	I1210 07:54:05.321880  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.321887  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.321893  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.321954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.348181  418823 cri.go:89] found id: ""
	I1210 07:54:05.348195  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.348203  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.348210  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.348225  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.379036  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.379062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.443960  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.443981  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.458603  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.458620  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.526883  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.519093   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.519877   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.521585   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.522069   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.523123   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:05.519093   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.519877   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.521585   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.522069   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.523123   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:05.526895  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:05.526910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
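
Each cycle above starts by asking the container runtime for a component's containers with `sudo crictl ps -a --quiet --name=<component>`; `--quiet` prints only container IDs, one per line, so empty output is what produces the `found id: ""` / `0 containers` lines in the log. A minimal sketch of that discovery step, assuming local execution with `os/exec` rather than minikube's ssh_runner (`listContainers` is a hypothetical helper, not minikube's real API):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs crictl the same way the log above shows; empty output
// means no container exists for that component ("0 containers").
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
```

In this run every component comes back empty, which is why the gatherer falls through to journal and dmesg collection each time.
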
	I1210 07:54:08.095997  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.105932  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.105991  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.130974  418823 cri.go:89] found id: ""
	I1210 07:54:08.130988  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.130996  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.131001  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.131153  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.155374  418823 cri.go:89] found id: ""
	I1210 07:54:08.155388  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.155396  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.155401  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.155458  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.180878  418823 cri.go:89] found id: ""
	I1210 07:54:08.180892  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.180899  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.180904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.180962  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.209651  418823 cri.go:89] found id: ""
	I1210 07:54:08.209664  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.209672  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.209676  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.209735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.235331  418823 cri.go:89] found id: ""
	I1210 07:54:08.235344  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.235358  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.235362  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.235421  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.260980  418823 cri.go:89] found id: ""
	I1210 07:54:08.260995  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.261003  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.261008  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.261066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.286809  418823 cri.go:89] found id: ""
	I1210 07:54:08.286824  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.286831  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.286838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.286848  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.353470  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.353491  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.367911  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.367928  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.434091  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.426418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.426959   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428869   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.430318   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:08.426418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.426959   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428869   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.430318   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:08.434101  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:08.434120  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:08.502201  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.502221  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
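
The recurring `describe nodes` failure is a connectivity problem, not a kubectl one: every attempt dies with `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on the apiserver port. A quick probe reproducing that check, assuming it runs on the minikube node itself:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint kubectl is failing against in the memcache.go errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // e.g. connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```

Refused (rather than timed out) connections are consistent with the crictl output: no kube-apiserver container ever exists, so the port is simply closed.
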
	I1210 07:54:11.031209  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.041439  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.041500  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.067253  418823 cri.go:89] found id: ""
	I1210 07:54:11.067268  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.067275  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.067280  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.067339  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.092951  418823 cri.go:89] found id: ""
	I1210 07:54:11.092965  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.092972  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.092978  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.093038  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.118430  418823 cri.go:89] found id: ""
	I1210 07:54:11.118445  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.118453  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.118458  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.118520  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.144820  418823 cri.go:89] found id: ""
	I1210 07:54:11.144835  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.144843  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.144848  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.144914  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.173374  418823 cri.go:89] found id: ""
	I1210 07:54:11.173388  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.173396  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.173401  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.173459  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.198352  418823 cri.go:89] found id: ""
	I1210 07:54:11.198367  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.198375  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.198380  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.198450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.224536  418823 cri.go:89] found id: ""
	I1210 07:54:11.224550  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.224559  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.224569  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.224579  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.290262  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.290283  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:11.304639  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.304658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.368924  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.359948   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.360716   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362347   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362996   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.364714   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:11.359948   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.360716   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362347   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362996   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.364714   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:11.368934  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:11.368944  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:11.435589  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.435610  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
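
The timestamps (07:54:05 → 08 → 11 → 14 …) show a fixed cadence of roughly three seconds between `sudo pgrep -xnf kube-apiserver.*minikube.*` attempts. A minimal sketch of such a fixed-interval poll; `waitForAPIServerProcess` is a hypothetical stand-in for minikube's internal wait logic, not its real function:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
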
	I1210 07:54:13.966356  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:13.976957  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:13.977022  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.004519  418823 cri.go:89] found id: ""
	I1210 07:54:14.004536  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.004546  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.004551  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.004633  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.033357  418823 cri.go:89] found id: ""
	I1210 07:54:14.033372  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.033380  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.033385  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.033445  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.059488  418823 cri.go:89] found id: ""
	I1210 07:54:14.059510  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.059517  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.059522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.059585  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.087964  418823 cri.go:89] found id: ""
	I1210 07:54:14.087987  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.087996  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.088002  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.088073  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.114469  418823 cri.go:89] found id: ""
	I1210 07:54:14.114483  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.114501  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.114507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.114580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.144394  418823 cri.go:89] found id: ""
	I1210 07:54:14.144408  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.144415  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.144420  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.144482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.173724  418823 cri.go:89] found id: ""
	I1210 07:54:14.173746  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.173754  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.173762  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.173779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.247855  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.239156   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.239953   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.241733   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.242311   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.243958   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:14.239156   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.239953   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.241733   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.242311   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.243958   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:14.247865  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:14.247879  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:14.317778  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.317798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:14.346568  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.346586  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:14.412678  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.412697  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
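
The "container status" step uses a shell fallback chain, visible verbatim in the log: ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`` tries crictl first and only falls back to docker if crictl is missing or fails. A sketch of the same try-in-order pattern; the helper name and error handling are illustrative, not minikube's implementation:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl and falls back to docker, mirroring the
// bash fallback chain shown in the gathered logs above.
func containerStatus() (string, error) {
	for _, argv := range [][]string{
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "docker", "ps", "-a"},
	} {
		if out, err := exec.Command(argv[0], argv[1:]...).Output(); err == nil {
			return string(out), nil
		}
	}
	return "", fmt.Errorf("neither crictl nor docker produced a container list")
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
```
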
	I1210 07:54:16.927406  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:16.938842  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:16.938903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:16.972184  418823 cri.go:89] found id: ""
	I1210 07:54:16.972197  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.972204  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:16.972209  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:16.972268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:16.999114  418823 cri.go:89] found id: ""
	I1210 07:54:16.999129  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.999136  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:16.999141  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:16.999204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.026900  418823 cri.go:89] found id: ""
	I1210 07:54:17.026913  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.026921  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.026926  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.026985  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.053121  418823 cri.go:89] found id: ""
	I1210 07:54:17.053135  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.053143  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.053148  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.053208  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.079184  418823 cri.go:89] found id: ""
	I1210 07:54:17.079198  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.079204  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.079209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.079268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.104597  418823 cri.go:89] found id: ""
	I1210 07:54:17.104611  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.104619  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.104624  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.104681  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.133412  418823 cri.go:89] found id: ""
	I1210 07:54:17.133426  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.133434  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.133441  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.133452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.147432  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.147452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.210612  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.202657   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.203522   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205130   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205458   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.206975   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:17.202657   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.203522   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205130   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205458   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.206975   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:17.210623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:17.210634  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:17.279473  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.279493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:17.307828  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.307852  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
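
Note that each gathering pass bounds its output: journalctl is capped to the last 400 lines per unit (`-n 400`), and dmesg is filtered to warn-and-worse levels before the same cap. A small wrapper replicating that bounded collection, reusing the exact commands from the log (the `gather` helper itself is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one bounded collection command through bash, as the log does.
func gather(name, script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s", name, err, out)
}

func main() {
	gather("kubelet", `sudo journalctl -u kubelet -n 400`)
	gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
}
```

Capping each source keeps repeated polling cycles like the ones in this report from ballooning the artifact while still preserving the most recent, most relevant lines.
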
	I1210 07:54:19.881299  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:19.891315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:19.891375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:19.926287  418823 cri.go:89] found id: ""
	I1210 07:54:19.926302  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.926309  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:19.926314  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:19.926373  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:19.961020  418823 cri.go:89] found id: ""
	I1210 07:54:19.961036  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.961043  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:19.961048  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:19.961111  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:19.994369  418823 cri.go:89] found id: ""
	I1210 07:54:19.994383  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.994390  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:19.994395  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:19.994455  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.028896  418823 cri.go:89] found id: ""
	I1210 07:54:20.028911  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.028919  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.028924  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.028989  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.059934  418823 cri.go:89] found id: ""
	I1210 07:54:20.059955  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.059963  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.060015  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.060093  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.086606  418823 cri.go:89] found id: ""
	I1210 07:54:20.086622  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.086629  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.086635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.086703  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.112469  418823 cri.go:89] found id: ""
	I1210 07:54:20.112486  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.112496  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.112504  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.112515  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.176933  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.176953  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.193125  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.193142  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.257603  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.249385   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.249848   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251552   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251918   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.253488   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:20.249385   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.249848   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251552   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251918   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.253488   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:20.257614  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:20.257625  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:20.324617  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.324638  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:22.853766  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:22.864101  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:22.864164  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:22.888959  418823 cri.go:89] found id: ""
	I1210 07:54:22.888974  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.888981  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:22.888986  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:22.889046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:22.921447  418823 cri.go:89] found id: ""
	I1210 07:54:22.921460  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.921468  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:22.921473  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:22.921543  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:22.955505  418823 cri.go:89] found id: ""
	I1210 07:54:22.955519  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.955526  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:22.955531  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:22.955594  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:22.986982  418823 cri.go:89] found id: ""
	I1210 07:54:22.986996  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.987004  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:22.987031  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:22.987094  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.016264  418823 cri.go:89] found id: ""
	I1210 07:54:23.016279  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.016286  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.016291  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.016354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.046460  418823 cri.go:89] found id: ""
	I1210 07:54:23.046474  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.046482  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.046507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.046577  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.074337  418823 cri.go:89] found id: ""
	I1210 07:54:23.074352  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.074361  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.074369  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.074384  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.139358  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.139380  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.154211  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.154233  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.215488  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.206516   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.207119   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.208073   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.209680   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.210273   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:23.206516   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.207119   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.208073   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.209680   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.210273   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:23.215499  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:23.215512  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:23.282950  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.282971  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:25.812054  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.822192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.822255  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.847807  418823 cri.go:89] found id: ""
	I1210 07:54:25.847822  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.847831  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.847836  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.847900  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:25.876611  418823 cri.go:89] found id: ""
	I1210 07:54:25.876626  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.876634  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:25.876638  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:25.876698  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:25.902947  418823 cri.go:89] found id: ""
	I1210 07:54:25.902961  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.902968  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:25.902973  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:25.903056  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:25.944041  418823 cri.go:89] found id: ""
	I1210 07:54:25.944055  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.944062  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:25.944068  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:25.944128  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:25.970835  418823 cri.go:89] found id: ""
	I1210 07:54:25.970849  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.970857  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:25.970862  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:25.970923  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.003198  418823 cri.go:89] found id: ""
	I1210 07:54:26.003214  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.003222  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.003228  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.003300  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.032526  418823 cri.go:89] found id: ""
	I1210 07:54:26.032540  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.032548  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.032556  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.032569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.099635  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.099655  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.114354  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.114373  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.179258  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.170492   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.171349   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173076   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173401   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.174907   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:26.170492   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.171349   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173076   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173401   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.174907   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:26.179269  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:26.179281  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:26.248336  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.248355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:28.782480  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.792391  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.792450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.817311  418823 cri.go:89] found id: ""
	I1210 07:54:28.817325  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.817332  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.817338  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.817393  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.841584  418823 cri.go:89] found id: ""
	I1210 07:54:28.841597  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.841605  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.841609  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.841666  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.867004  418823 cri.go:89] found id: ""
	I1210 07:54:28.867040  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.867048  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.867052  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.867110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:28.891591  418823 cri.go:89] found id: ""
	I1210 07:54:28.891604  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.891615  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:28.891621  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:28.891677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:28.927624  418823 cri.go:89] found id: ""
	I1210 07:54:28.927637  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.927645  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:28.927650  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:28.927714  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:28.955409  418823 cri.go:89] found id: ""
	I1210 07:54:28.955423  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.955430  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:28.955435  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:28.955493  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:28.980779  418823 cri.go:89] found id: ""
	I1210 07:54:28.980794  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.980801  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:28.980808  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:28.980819  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:28.995862  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:28.995878  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.065674  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.057420   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.058224   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.059795   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.060097   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.061321   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:29.065683  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:29.065695  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:29.133594  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.133615  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.165522  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.165539  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:31.733707  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.743741  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.743803  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.768618  418823 cri.go:89] found id: ""
	I1210 07:54:31.768633  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.768647  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.768652  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.768712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.797641  418823 cri.go:89] found id: ""
	I1210 07:54:31.797656  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.797663  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.797668  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.797729  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.823152  418823 cri.go:89] found id: ""
	I1210 07:54:31.823166  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.823174  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.823178  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.823241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.849644  418823 cri.go:89] found id: ""
	I1210 07:54:31.849659  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.849666  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.849671  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.849735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.877522  418823 cri.go:89] found id: ""
	I1210 07:54:31.877545  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.877553  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.877558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.877625  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.903129  418823 cri.go:89] found id: ""
	I1210 07:54:31.903142  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.903150  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.903155  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.903212  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:31.941362  418823 cri.go:89] found id: ""
	I1210 07:54:31.941376  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.941383  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:31.941391  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:31.941402  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.025544  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.025566  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.040949  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.040969  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.110721  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.101989   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.102773   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104458   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104789   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.106701   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:32.110732  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:32.110743  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:32.178647  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.178670  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
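When no containers are found, minikube falls back to gathering host-level logs. The exact commands it runs appear in the lines above; collected manually on the node (with the container-status command simplified from the log's `which crictl` form), they would be:

    sudo journalctl -u kubelet -n 400          # kubelet service log
    sudo journalctl -u crio -n 400             # CRI-O service log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors
    sudo crictl ps -a || sudo docker ps -a     # container status, whichever runtime responds

These are the same four sources ('kubelet', 'CRI-O', 'dmesg', 'container status') that each gather cycle iterates over.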
	I1210 07:54:34.707070  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.717245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.717310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.745693  418823 cri.go:89] found id: ""
	I1210 07:54:34.745707  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.745714  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.745726  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.745790  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.771395  418823 cri.go:89] found id: ""
	I1210 07:54:34.771409  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.771416  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.771421  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.771479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.797775  418823 cri.go:89] found id: ""
	I1210 07:54:34.797788  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.797796  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.797801  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.797861  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.825083  418823 cri.go:89] found id: ""
	I1210 07:54:34.825100  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.825107  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.825112  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.825177  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.850864  418823 cri.go:89] found id: ""
	I1210 07:54:34.850879  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.850896  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.850901  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.850975  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.875132  418823 cri.go:89] found id: ""
	I1210 07:54:34.875146  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.875154  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.875159  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.875227  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.899938  418823 cri.go:89] found id: ""
	I1210 07:54:34.899953  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.899970  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.899979  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:34.899990  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:34.923898  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:34.923916  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.004342  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:34.994993   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.995801   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997355   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997871   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.999627   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:35.004372  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:35.004385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:35.076257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.076279  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:35.104842  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:35.104858  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:37.672039  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.681946  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.682009  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.706328  418823 cri.go:89] found id: ""
	I1210 07:54:37.706342  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.706349  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.706354  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.706420  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.731157  418823 cri.go:89] found id: ""
	I1210 07:54:37.731171  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.731179  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.731183  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.731243  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.756672  418823 cri.go:89] found id: ""
	I1210 07:54:37.756686  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.756693  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.756698  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.756758  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.782323  418823 cri.go:89] found id: ""
	I1210 07:54:37.782337  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.782344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.782349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.782407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.809398  418823 cri.go:89] found id: ""
	I1210 07:54:37.809411  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.809425  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.809430  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.809488  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.834279  418823 cri.go:89] found id: ""
	I1210 07:54:37.834300  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.834307  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.834311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.834378  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.860329  418823 cri.go:89] found id: ""
	I1210 07:54:37.860343  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.860351  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.860359  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:37.860369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:37.933541  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:37.923454   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.924390   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926115   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926751   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.928659   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:37.933553  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:37.933564  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:38.012971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.012996  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:38.049266  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:38.049284  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.124985  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.125006  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:40.640115  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.651783  418823 kubeadm.go:602] duration metric: took 4m3.269334188s to restartPrimaryControlPlane
	W1210 07:54:40.651842  418823 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 07:54:40.651915  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
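The repeated pgrep above is minikube polling for a kube-apiserver process; the flags mean exact match (-x), newest matching process (-n), and match against the full command line (-f). After roughly four minutes with no match it gives up on restarting the existing control plane and wipes it with kubeadm reset. A sketch of the same poll with an explicit timeout, assuming the same process pattern:

    # Poll for a kube-apiserver process for up to 4 minutes, then reset.
    deadline=$(( $(date +%s) + 240 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
        break
      fi
      sleep 3
    done

The 240s budget here is illustrative; the log shows minikube spent 4m3s on the restart attempt before declaring it failed.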
	I1210 07:54:41.061132  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:54:41.073851  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:54:41.081733  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:54:41.081788  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:54:41.089443  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:54:41.089453  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:54:41.089505  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:54:41.097510  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:54:41.097570  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:54:41.105078  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:54:41.112622  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:54:41.112682  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:54:41.120112  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.127831  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:54:41.127887  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.135843  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:54:41.143605  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:54:41.143662  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
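Before re-running kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; since the earlier reset already deleted them, every grep exits with status 2 and the rm calls are no-ops. The pattern, written as a standalone loop (endpoint taken from the log):

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing: remove so kubeadm regenerates it
      fi
    done

The kubeadm init that follows passes a long --ignore-preflight-errors list so that leftover manifests, port checks, and the SystemVerification failure seen below do not abort the run.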
	I1210 07:54:41.150893  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:54:41.188283  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:54:41.188576  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:54:41.266308  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:54:41.266369  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:54:41.266407  418823 kubeadm.go:319] OS: Linux
	I1210 07:54:41.266448  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:54:41.266493  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:54:41.266536  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:54:41.266581  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:54:41.266627  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:54:41.266672  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:54:41.266714  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:54:41.266758  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:54:41.266801  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:54:41.327793  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:54:41.327890  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:54:41.327975  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:54:41.335492  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:54:41.340870  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:54:41.340961  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:54:41.341031  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:54:41.341119  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:54:41.341186  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:54:41.341262  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:54:41.341320  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:54:41.341398  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:54:41.341465  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:54:41.341545  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:54:41.341622  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:54:41.341659  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:54:41.341719  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:54:41.831104  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:54:41.953522  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:54:42.205323  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:54:42.449785  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:54:42.618213  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:54:42.619047  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:54:42.621575  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:54:42.624790  418823 out.go:252]   - Booting up control plane ...
	I1210 07:54:42.624883  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:54:42.624959  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:54:42.625035  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:54:42.639751  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:54:42.639880  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:54:42.648702  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:54:42.648797  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:54:42.648841  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:54:42.779710  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:54:42.779857  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:58:42.778273  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000214333s
	I1210 07:58:42.778318  418823 kubeadm.go:319] 
	I1210 07:58:42.778386  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:58:42.778418  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:58:42.778523  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:58:42.778528  418823 kubeadm.go:319] 
	I1210 07:58:42.778632  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:58:42.778679  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:58:42.778709  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:58:42.778712  418823 kubeadm.go:319] 
	I1210 07:58:42.783355  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:58:42.783807  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:58:42.783918  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:58:42.784153  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:58:42.784159  418823 kubeadm.go:319] 
	I1210 07:58:42.784227  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
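The init attempt dies at the kubelet health gate: kubeadm polls the kubelet's local healthz endpoint for 4m0s and never gets an answer. The troubleshooting commands kubeadm suggests, plus the exact probe it performs, can be run directly on the node:

    curl -sSL http://127.0.0.1:10248/healthz   # the probe kubeadm retries for up to 4m
    systemctl status kubelet                   # is the unit running at all?
    journalctl -xeu kubelet | tail -n 50       # why it exited, if it crashed

Given the cgroups v1 deprecation warning in the preflight output, a v1.35 kubelet refusing to start on a cgroup v1 host (this kernel is 5.15.0-1084-aws) is a plausible cause; see the kubelet configuration note after the full error text below.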
	W1210 07:58:42.784352  418823 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000214333s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
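The SystemVerification warning above spells out the escape hatch: on a cgroup v1 host, kubelet v1.35 or newer must be told explicitly not to fail. Assuming the field name matches the warning (FailCgroupV1 in KubeletConfiguration, camel-cased in YAML), a minimal override would look like the following; whether minikube's generated /var/lib/kubelet/config.yaml can simply be patched this way is an assumption, not something the log confirms:

    # Hypothetical patch, illustrative only: append the override and restart.
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    failCgroupV1: false
    EOF
    sudo systemctl restart kubelet

Migrating the host to cgroup v2, as the warning recommends, avoids the override entirely.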
	
	I1210 07:58:42.784459  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 07:58:43.198112  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:58:43.211996  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:58:43.212056  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:58:43.219732  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:58:43.219740  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:58:43.219791  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:58:43.228096  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:58:43.228153  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:58:43.235851  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:58:43.244105  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:58:43.244161  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:58:43.252172  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.259776  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:58:43.259838  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.267182  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:58:43.274881  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:58:43.274939  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:58:43.282494  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:58:43.323208  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:58:43.323257  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:58:43.392495  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:58:43.392566  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:58:43.392605  418823 kubeadm.go:319] OS: Linux
	I1210 07:58:43.392653  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:58:43.392700  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:58:43.392753  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:58:43.392806  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:58:43.392856  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:58:43.392902  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:58:43.392950  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:58:43.392997  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:58:43.393041  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:58:43.459397  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:58:43.459500  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:58:43.459594  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:58:43.467473  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:58:43.472849  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:58:43.472935  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:58:43.472999  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:58:43.473075  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:58:43.473135  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:58:43.473203  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:58:43.473256  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:58:43.473324  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:58:43.473385  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:58:43.474012  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:58:43.474414  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:58:43.474604  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:58:43.474667  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:58:43.690916  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:58:43.922489  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:58:44.055635  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:58:44.187430  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:58:44.228570  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:58:44.229295  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:58:44.233140  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:58:44.236201  418823 out.go:252]   - Booting up control plane ...
	I1210 07:58:44.236295  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:58:44.236371  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:58:44.236933  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:58:44.251863  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:58:44.251964  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:58:44.259287  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:58:44.259598  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:58:44.259801  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:58:44.391514  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:58:44.391627  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:02:44.389879  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00019224s
	I1210 08:02:44.389912  418823 kubeadm.go:319] 
	I1210 08:02:44.389980  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 08:02:44.390013  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 08:02:44.390123  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 08:02:44.390155  418823 kubeadm.go:319] 
	I1210 08:02:44.390271  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 08:02:44.390303  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 08:02:44.390331  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 08:02:44.390335  418823 kubeadm.go:319] 
	I1210 08:02:44.395328  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:02:44.395720  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 08:02:44.395823  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 08:02:44.396068  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 08:02:44.396072  418823 kubeadm.go:319] 
	I1210 08:02:44.396138  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 08:02:44.396188  418823 kubeadm.go:403] duration metric: took 12m7.052327562s to StartCluster
	I1210 08:02:44.396219  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:02:44.396280  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:02:44.421374  418823 cri.go:89] found id: ""
	I1210 08:02:44.421389  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.421396  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:02:44.421401  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:02:44.421463  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:02:44.447342  418823 cri.go:89] found id: ""
	I1210 08:02:44.447356  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.447363  418823 logs.go:284] No container was found matching "etcd"
	I1210 08:02:44.447368  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:02:44.447429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:02:44.472601  418823 cri.go:89] found id: ""
	I1210 08:02:44.472614  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.472621  418823 logs.go:284] No container was found matching "coredns"
	I1210 08:02:44.472627  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:02:44.472684  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:02:44.501973  418823 cri.go:89] found id: ""
	I1210 08:02:44.501986  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.501993  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:02:44.502000  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:02:44.502059  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:02:44.527997  418823 cri.go:89] found id: ""
	I1210 08:02:44.528011  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.528018  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:02:44.528023  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:02:44.528083  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:02:44.558353  418823 cri.go:89] found id: ""
	I1210 08:02:44.558367  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.558374  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:02:44.558379  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:02:44.558439  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:02:44.583751  418823 cri.go:89] found id: ""
	I1210 08:02:44.583764  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.583771  418823 logs.go:284] No container was found matching "kindnet"
	I1210 08:02:44.583780  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 08:02:44.583792  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:02:44.598048  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:02:44.598065  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:02:44.670126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:02:44.661736   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.662310   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664031   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664536   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.666240   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 08:02:44.670142  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:02:44.670153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:02:44.741133  418823 logs.go:123] Gathering logs for container status ...
	I1210 08:02:44.741153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:02:44.768780  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 08:02:44.768797  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 08:02:44.836964  418823 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 08:02:44.837011  418823 out.go:285] * 
	W1210 08:02:44.837080  418823 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.837155  418823 out.go:285] * 
	W1210 08:02:44.839300  418823 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 08:02:44.844978  418823 out.go:203] 
	W1210 08:02:44.848781  418823 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.848820  418823 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 08:02:44.848841  418823 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 08:02:44.852612  418823 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928236251Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928272445Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928313939Z" level=info msg="Create NRI interface"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928414264Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928423158Z" level=info msg="runtime interface created"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928435031Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.92844098Z" level=info msg="runtime interface starting up..."
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928453222Z" level=info msg="starting plugins..."
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928465965Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928531492Z" level=info msg="No systemd watchdog enabled"
	Dec 10 07:50:35 functional-314220 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.331601646Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=62979dbb-32a0-43d5-a3b2-a98045dd82da name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.332356674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=882162fe-73f4-4075-9551-d0a546a62bbf name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.332837779Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=07883432-643e-4682-a159-ee81c5c97128 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.333259733Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=c2e92f0d-1459-497a-8d07-d423bb265c62 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.333667081Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0e2e5041-5e30-43e4-8893-355aed834dc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.334042339Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=16ec4d28-0473-431c-a6c6-f756cd1ed250 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.334553221Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=945bf42d-863d-43db-9dbb-1cb7338cdf87 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.462758438Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=30754360-77fc-41d9-961a-703309105bf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.463612109Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ce58a9d0-ec5e-41a7-a162-73ed5f175442 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464131886Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=af48f6b4-c6c8-458a-8d08-3443ae3e881b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464662517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4191b9f9-c176-40bc-b3bb-ec0edd3076c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465135606Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eaddf230-cd32-4499-a396-5bbd1b1cb31a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465587147Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=300cd477-ebce-4fed-8c84-bc9781d52848 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.466022016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1367fa60-098e-4704-b6f3-b114a75d5405 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:02:46.207092   21176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:46.207731   21176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:46.209527   21176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:46.210097   21176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:46.211740   21176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	[Dec10 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 08:02:46 up  2:45,  0 user,  load average: 0.15, 0.19, 0.48
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:02:43 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:02:44 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 319.
	Dec 10 08:02:44 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:44 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:44 functional-314220 kubelet[20982]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:44 functional-314220 kubelet[20982]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:44 functional-314220 kubelet[20982]: E1210 08:02:44.203899   20982 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:02:44 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:02:44 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:02:44 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 10 08:02:44 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:44 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:44 functional-314220 kubelet[21069]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:44 functional-314220 kubelet[21069]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:44 functional-314220 kubelet[21069]: E1210 08:02:44.975875   21069 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:02:44 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:02:44 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:02:45 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 10 08:02:45 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:45 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:45 functional-314220 kubelet[21090]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:45 functional-314220 kubelet[21090]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:45 functional-314220 kubelet[21090]: E1210 08:02:45.730429   21090 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:02:45 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:02:45 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (399.924015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.34s)
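The kubelet journal above shows why this start never converged: on every one of its 300+ restarts, kubelet v1.35.0-beta.0 fails configuration validation because the host is still on cgroup v1. A minimal shell sketch for confirming the hierarchy and retrying with the workaround minikube itself suggests; the stat probe and journalctl call are standard tools, and the profile name and flag are taken from this run:

	# "cgroup2fs" here means cgroup v2; "tmpfs" means the legacy v1 hierarchy
	# that kubelet v1.35 rejects unless FailCgroupV1 is explicitly set to false
	# (the kubelet config option named in the kubeadm warning above).
	stat -fc %T /sys/fs/cgroup/

	# Inspect the repeated kubelet validation failures directly:
	journalctl -xeu kubelet | tail -n 50

	# Retry with the extra kubelet config named in the minikube suggestion above:
	out/minikube-linux-arm64 start -p functional-314220 \
	  --extra-config=kubelet.cgroup-driver=systemd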

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-314220 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-314220 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (60.187135ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-314220 get po -l tier=control-plane -n kube-system -o=json": exit status 1
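With the control plane never having come up, every kubectl call is refused at 192.168.49.2:8441, so the empty pod list reflects an unreachable apiserver rather than a labeling problem. A quick hedged probe of the endpoint named in the stderr above (standard curl flags; the address and port come from this run):

	# -k skips TLS verification, which is fine for a pure reachability check;
	# with the apiserver stopped, this falls through to the echo.
	curl -sk https://192.168.49.2:8441/healthz || echo "apiserver unreachable"

	# Cross-check minikube's own component view for the same profile:
	out/minikube-linux-arm64 status -p functional-314220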
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (293.836411ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-446865 image ls --format yaml --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ ssh     │ functional-446865 ssh pgrep buildkitd                                                                                                             │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ image   │ functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr                                            │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format json --alsologtostderr                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls                                                                                                                        │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ image   │ functional-446865 image ls --format table --alsologtostderr                                                                                       │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ delete  │ -p functional-446865                                                                                                                              │ functional-446865 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │ 10 Dec 25 07:35 UTC │
	│ start   │ -p functional-314220 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:35 UTC │                     │
	│ start   │ -p functional-314220 --alsologtostderr -v=8                                                                                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:44 UTC │                     │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add registry.k8s.io/pause:latest                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache add minikube-local-cache-test:functional-314220                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ functional-314220 cache delete minikube-local-cache-test:functional-314220                                                                        │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl images                                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	│ cache   │ functional-314220 cache reload                                                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ kubectl │ functional-314220 kubectl -- --context functional-314220 get pods                                                                                 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	│ start   │ -p functional-314220 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:50:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:50:32.899349  418823 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:50:32.899467  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899470  418823 out.go:374] Setting ErrFile to fd 2...
	I1210 07:50:32.899475  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899728  418823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:50:32.900077  418823 out.go:368] Setting JSON to false
	I1210 07:50:32.900875  418823 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9183,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:50:32.900927  418823 start.go:143] virtualization:  
	I1210 07:50:32.904391  418823 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:50:32.909970  418823 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:50:32.910062  418823 notify.go:221] Checking for updates...
	I1210 07:50:32.913755  418823 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:50:32.917032  418823 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:50:32.919882  418823 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:50:32.922630  418823 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:50:32.926514  418823 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:50:32.929831  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:32.929952  418823 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:50:32.973254  418823 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:50:32.973375  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.030281  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.020639734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.030378  418823 docker.go:319] overlay module found
	I1210 07:50:33.033510  418823 out.go:179] * Using the docker driver based on existing profile
	I1210 07:50:33.036367  418823 start.go:309] selected driver: docker
	I1210 07:50:33.036393  418823 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.036475  418823 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:50:33.036573  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.101667  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.09179395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.102098  418823 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:50:33.102120  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:33.102171  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:33.102212  418823 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.107143  418823 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:50:33.110125  418823 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:50:33.113004  418823 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:50:33.115816  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:33.115854  418823 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:50:33.115862  418823 cache.go:65] Caching tarball of preloaded images
	I1210 07:50:33.115956  418823 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:50:33.115966  418823 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 07:50:33.115961  418823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:50:33.116084  418823 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:50:33.135517  418823 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:50:33.135528  418823 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:50:33.135548  418823 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:50:33.135579  418823 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:50:33.135644  418823 start.go:364] duration metric: took 47.935µs to acquireMachinesLock for "functional-314220"
	I1210 07:50:33.135662  418823 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:50:33.135667  418823 fix.go:54] fixHost starting: 
	I1210 07:50:33.135928  418823 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:50:33.153142  418823 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:50:33.153176  418823 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:50:33.156510  418823 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:50:33.156542  418823 machine.go:94] provisionDockerMachine start ...
	I1210 07:50:33.156629  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.173363  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.173679  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.173685  418823 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:50:33.306701  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.306715  418823 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:50:33.306784  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.323402  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.323703  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.323711  418823 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:50:33.463802  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.463873  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.481663  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.481979  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.481993  418823 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:50:33.615371  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:50:33.615387  418823 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:50:33.615415  418823 ubuntu.go:190] setting up certificates
	I1210 07:50:33.615424  418823 provision.go:84] configureAuth start
	I1210 07:50:33.615481  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:33.633344  418823 provision.go:143] copyHostCerts
	I1210 07:50:33.633409  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:50:33.633416  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:50:33.633490  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:50:33.633597  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:50:33.633601  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:50:33.633627  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:50:33.633685  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:50:33.633688  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:50:33.633710  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:50:33.633815  418823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:50:33.839628  418823 provision.go:177] copyRemoteCerts
	I1210 07:50:33.839683  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:50:33.839721  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.857491  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:33.954662  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:50:33.972200  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:50:33.989946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:50:34.010600  418823 provision.go:87] duration metric: took 395.152109ms to configureAuth
	I1210 07:50:34.010620  418823 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:50:34.010837  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:34.010945  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.031319  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:34.031635  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:34.031646  418823 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:50:34.394456  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:50:34.394468  418823 machine.go:97] duration metric: took 1.237919377s to provisionDockerMachine
	I1210 07:50:34.394480  418823 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:50:34.394492  418823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:50:34.394553  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:50:34.394594  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.425725  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.527110  418823 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:50:34.530555  418823 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:50:34.530572  418823 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:50:34.530582  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:50:34.530636  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:50:34.530720  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:50:34.530798  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:50:34.530841  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:50:34.538245  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:34.555946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:50:34.573402  418823 start.go:296] duration metric: took 178.908422ms for postStartSetup
	I1210 07:50:34.573478  418823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:50:34.573515  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.591144  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.684092  418823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:50:34.688828  418823 fix.go:56] duration metric: took 1.553153828s for fixHost
	I1210 07:50:34.688843  418823 start.go:83] releasing machines lock for "functional-314220", held for 1.553192081s
	I1210 07:50:34.688922  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:34.705960  418823 ssh_runner.go:195] Run: cat /version.json
	I1210 07:50:34.705982  418823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:50:34.706002  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.706033  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.724227  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.734363  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.905519  418823 ssh_runner.go:195] Run: systemctl --version
	I1210 07:50:34.911896  418823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:50:34.947949  418823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:50:34.952265  418823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:50:34.952348  418823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:50:34.960087  418823 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:50:34.960100  418823 start.go:496] detecting cgroup driver to use...
	I1210 07:50:34.960131  418823 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:50:34.960194  418823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:50:34.975734  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:50:34.988235  418823 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:50:34.988306  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:50:35.008024  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:50:35.023507  418823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:50:35.140776  418823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:50:35.287143  418823 docker.go:234] disabling docker service ...
	I1210 07:50:35.287205  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:50:35.302191  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:50:35.316045  418823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:50:35.435977  418823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:50:35.558581  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:50:35.570905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:50:35.584271  418823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:50:35.584341  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.593128  418823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:50:35.593191  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.602242  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.611204  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.619936  418823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:50:35.627869  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.636843  418823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.645059  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.653527  418823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:50:35.660914  418823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:50:35.668098  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:35.785150  418823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:50:35.938526  418823 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:50:35.938594  418823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:50:35.943564  418823 start.go:564] Will wait 60s for crictl version
	I1210 07:50:35.943634  418823 ssh_runner.go:195] Run: which crictl
	I1210 07:50:35.950126  418823 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:50:35.976476  418823 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:50:35.976565  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.013250  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.049514  418823 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:50:36.052392  418823 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:50:36.073467  418823 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:50:36.080871  418823 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 07:50:36.083861  418823 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:50:36.084003  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:36.084083  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.122033  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.122045  418823 crio.go:433] Images already preloaded, skipping extraction
	I1210 07:50:36.122104  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.147981  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.147994  418823 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:50:36.148000  418823 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:50:36.148093  418823 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:50:36.148179  418823 ssh_runner.go:195] Run: crio config
	I1210 07:50:36.223557  418823 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 07:50:36.223582  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:36.223591  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:36.223605  418823 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:50:36.223627  418823 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:50:36.223742  418823 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
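
	(Aside: the kubeadm manifest above is rendered by minikube from the options struct logged at kubeadm.go:190. The Go sketch below is a minimal illustration of that render step using text/template; the struct fields and template body are hypothetical stand-ins, not minikube's actual templates.)

```go
package main

import (
	"os"
	"text/template"
)

// opts mirrors a few of the kubeadm options logged above; both this
// struct and the template are illustrative, not minikube's own code.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	PodSubnet        string
	ServiceCIDR      string
}

const manifest = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(manifest))
	// Values taken from the cluster config shown in this log.
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.49.2",
		APIServerPort:    8441,
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}
```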
	
	I1210 07:50:36.223809  418823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:50:36.231667  418823 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:50:36.231750  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:50:36.239592  418823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:50:36.252574  418823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:50:36.265349  418823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1210 07:50:36.278251  418823 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:50:36.281864  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:36.395980  418823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:50:36.662807  418823 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:50:36.662818  418823 certs.go:195] generating shared ca certs ...
	I1210 07:50:36.662833  418823 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:50:36.662974  418823 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:50:36.663036  418823 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:50:36.663044  418823 certs.go:257] generating profile certs ...
	I1210 07:50:36.663128  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:50:36.663184  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:50:36.663221  418823 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:50:36.663326  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:50:36.663359  418823 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:50:36.663370  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:50:36.663396  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:50:36.663419  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:50:36.663444  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:50:36.663487  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:36.664085  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:50:36.684901  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:50:36.704871  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:50:36.724001  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:50:36.742252  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:50:36.759395  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:50:36.776213  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:50:36.793265  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:50:36.810512  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:50:36.828353  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:50:36.845515  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:50:36.862765  418823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:50:36.875122  418823 ssh_runner.go:195] Run: openssl version
	I1210 07:50:36.881447  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.888818  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:50:36.896054  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899817  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899876  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.940839  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:50:36.948274  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.955506  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:50:36.963139  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966818  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966873  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:37.008344  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:50:37.018542  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.028848  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:50:37.037787  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041789  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041883  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.083088  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:50:37.090399  418823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:50:37.093984  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:50:37.134711  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:50:37.175584  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:50:37.216322  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:50:37.258210  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:50:37.300727  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
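
	(Aside: each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 seconds), exiting non-zero if it will. A minimal Go equivalent of that check, assuming a PEM-encoded certificate path as the sole argument; illustrative only, not minikube's code.)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Reports whether the PEM certificate in os.Args[1] expires within
// 24h, matching the semantics of `openssl x509 -checkend 86400`.
func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// If NotAfter falls before now+24h, the cert is about to expire.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
```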
	I1210 07:50:37.343870  418823 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:37.343957  418823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:50:37.344031  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.373693  418823 cri.go:89] found id: ""
	I1210 07:50:37.373755  418823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:50:37.382429  418823 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:50:37.382439  418823 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:50:37.382493  418823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:50:37.389449  418823 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.389979  418823 kubeconfig.go:125] found "functional-314220" server: "https://192.168.49.2:8441"
	I1210 07:50:37.391548  418823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:50:37.399103  418823 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 07:36:02.271715799 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 07:50:36.273283366 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
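
	(Aside: the drift check above is an ordinary `diff -u` run over SSH: exit status 0 means the rendered kubeadm.yaml is unchanged, exit status 1 means it drifted and the control plane is reconfigured from the new file. A minimal local sketch of the same comparison; paths are the ones shown in this log, and the code is illustrative, not minikube's ssh_runner.)

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// diff exits 0 when the files match and 1 when they differ;
	// minikube runs the same comparison remotely via ssh_runner.
	out, err := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		// Exit status 1 => drift detected; reconfigure from the new file.
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
		return
	}
	if err != nil {
		panic(err) // diff missing or some other failure
	}
	fmt.Println("kubeadm config unchanged")
}
```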
	I1210 07:50:37.399128  418823 kubeadm.go:1161] stopping kube-system containers ...
	I1210 07:50:37.399140  418823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 07:50:37.399196  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.434614  418823 cri.go:89] found id: ""
	I1210 07:50:37.434674  418823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 07:50:37.455844  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:50:37.463706  418823 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 07:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 07:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 07:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 10 07:40 /etc/kubernetes/scheduler.conf
	
	I1210 07:50:37.463780  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:50:37.471472  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:50:37.478782  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.478837  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:50:37.486355  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.493976  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.494040  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.501640  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:50:37.509588  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.509645  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:50:37.517276  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:50:37.525049  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:37.571686  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.573879  418823 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.002165526s)
	I1210 07:50:39.573940  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.780126  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.857417  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.903067  418823 api_server.go:52] waiting for apiserver process to appear ...
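
	(Aside: the long run of pgrep invocations that follows is this wait loop in action: the same `pgrep -xnf kube-apiserver.*minikube.*` probe, run via sudo over SSH, is retried roughly every 500ms until the process appears or the time budget runs out. A minimal local sketch of the pattern; the command and 60s timeout mirror the log, and error handling is simplified.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Polls for a kube-apiserver process every 500ms, giving up after
// one minute; pgrep exits 0 once a matching process exists.
func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
```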
	I1210 07:50:39.903139  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:40.403973  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:40.903804  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:41.403355  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:41.904207  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:42.404057  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:42.903818  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:43.404234  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:43.904036  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:44.403331  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:44.904250  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:45.404168  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:45.904093  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:46.404204  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:46.904144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:47.404213  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:47.903250  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:48.404144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:48.904262  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:49.404011  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:49.903321  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:50.403990  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:50.903998  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:51.403914  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:51.903990  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:52.403942  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:52.903796  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:53.403576  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:53.903966  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:54.403314  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:54.904147  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:55.404245  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:55.903953  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:56.403340  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:56.904274  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:57.404124  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:57.903801  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:58.404241  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:58.903869  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:59.403287  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:59.903954  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:00.403352  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:00.904043  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:01.403894  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:01.903648  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:02.404219  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:02.903678  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:03.403948  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:03.904224  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:04.403353  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:04.904036  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:05.404217  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:05.903272  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:06.404216  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:06.903390  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:07.403379  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:07.903804  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:08.404215  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:08.904228  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:09.404143  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:09.904284  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:10.403331  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:10.904097  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:11.404225  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:11.903848  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:12.403282  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:12.903360  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:13.403955  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:13.903329  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:14.404081  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:14.903215  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:15.403223  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:15.903728  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:16.403337  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:16.904035  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:17.403389  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:17.904062  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:18.403915  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:18.903844  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:19.403287  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:19.903456  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:20.403269  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:20.903919  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:21.403294  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:21.903959  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:22.403330  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:22.903425  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:23.403210  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:23.904289  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:24.403468  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:24.903578  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.403340  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.903276  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.404241  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.903945  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:27.404152  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:27.903337  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.404037  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.903401  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:29.403321  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:29.904015  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.403353  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.904231  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.403897  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.903428  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:32.404285  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:32.904059  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.403419  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.903340  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.404109  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.903323  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.404151  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.903331  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.403229  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.904295  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:37.403231  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:37.904159  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.403982  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.903898  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:39.403315  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
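(The half-second cadence above is minikube polling the node for a kube-apiserver process; the loop runs the full minute without a hit. A minimal sketch of such a wait loop, assuming a hypothetical runSSH helper in place of minikube's ssh_runner:)

// waitloop.go: sketch of the 500ms apiserver wait loop seen above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runSSH is a hypothetical stand-in for minikube's ssh_runner; here it
// runs the command locally and reports whether it exited 0.
func runSSH(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// waitForAPIServerProcess polls pgrep until the kube-apiserver process
// appears or the deadline passes, mirroring the log lines above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(time.Minute); err != nil {
		fmt.Println(err)
	}
}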
	I1210 07:51:39.903344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:39.903423  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:39.933715  418823 cri.go:89] found id: ""
	I1210 07:51:39.933730  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.933737  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:39.933741  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:39.933807  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:39.959343  418823 cri.go:89] found id: ""
	I1210 07:51:39.959358  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.959366  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:39.959371  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:39.959428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:39.985280  418823 cri.go:89] found id: ""
	I1210 07:51:39.985294  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.985302  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:39.985307  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:39.985366  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:40.021888  418823 cri.go:89] found id: ""
	I1210 07:51:40.021904  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.021912  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:40.021917  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:40.022019  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:40.050222  418823 cri.go:89] found id: ""
	I1210 07:51:40.050238  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.050245  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:40.050251  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:40.050314  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:40.076513  418823 cri.go:89] found id: ""
	I1210 07:51:40.076528  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.076536  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:40.076541  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:40.076603  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:40.106190  418823 cri.go:89] found id: ""
	I1210 07:51:40.106206  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.106213  418823 logs.go:284] No container was found matching "kindnet"
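(Having found no apiserver process, minikube enumerates each expected control-plane container through the CRI. A sketch of that per-component query, run locally for illustration; it assumes crictl is installed and sudo is passwordless:)

// crilist.go: sketch of the per-component crictl queries seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of containers whose name matches the
// filter, in any state, mirroring `crictl ps -a --quiet --name=<name>`.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}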
	I1210 07:51:40.106221  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:40.106232  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:40.171760  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:40.171781  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:40.188577  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:40.188594  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:40.259869  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:40.250864   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.251869   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.253556   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.254191   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.255824   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:40.250864   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.251869   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.253556   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.254191   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.255824   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:40.259893  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:40.259905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:40.330751  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:40.330772  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
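(With zero containers matching any component, the fallback is node-level log collection. A sketch of the journalctl/dmesg gathering step, echoing the exact commands in the lines above and assuming the same passwordless sudo:)

// gatherlogs.go: sketch of the "Gathering logs for ..." sequence seen above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same diagnostic commands as the log, in the same order.
	cmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", c.name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", c.name, out)
	}
}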
	I1210 07:51:42.864666  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.875209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:42.875278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:42.906775  418823 cri.go:89] found id: ""
	I1210 07:51:42.906788  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.906796  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:42.906802  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:42.906860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:42.932120  418823 cri.go:89] found id: ""
	I1210 07:51:42.932134  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.932142  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:42.932147  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:42.932207  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:42.960769  418823 cri.go:89] found id: ""
	I1210 07:51:42.960784  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.960793  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:42.960798  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:42.960857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:42.986269  418823 cri.go:89] found id: ""
	I1210 07:51:42.986285  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.986294  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:42.986299  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:42.986361  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:43.021139  418823 cri.go:89] found id: ""
	I1210 07:51:43.021155  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.021163  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:43.021168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:43.021241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:43.047486  418823 cri.go:89] found id: ""
	I1210 07:51:43.047501  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.047508  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:43.047513  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:43.047576  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:43.073233  418823 cri.go:89] found id: ""
	I1210 07:51:43.073247  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.073255  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:43.073263  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:43.073273  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:43.139078  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:43.139105  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:43.153579  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:43.153595  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:43.240938  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:43.232595   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.233028   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.234681   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.235233   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.236893   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:43.232595   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.233028   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.234681   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.235233   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.236893   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:43.240958  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:43.240970  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:43.308772  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:43.308794  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:45.841619  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.852276  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:45.852345  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:45.887199  418823 cri.go:89] found id: ""
	I1210 07:51:45.887215  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.887222  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:45.887237  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:45.887324  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:45.918859  418823 cri.go:89] found id: ""
	I1210 07:51:45.918873  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.918880  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:45.918885  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:45.918944  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:45.943991  418823 cri.go:89] found id: ""
	I1210 07:51:45.944006  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.944014  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:45.944019  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:45.944088  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:45.970351  418823 cri.go:89] found id: ""
	I1210 07:51:45.970371  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.970379  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:45.970384  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:45.970444  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:45.995587  418823 cri.go:89] found id: ""
	I1210 07:51:45.995601  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.995609  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:45.995614  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:45.995678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:46.023570  418823 cri.go:89] found id: ""
	I1210 07:51:46.023586  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.023593  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:46.023599  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:46.023660  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:46.056294  418823 cri.go:89] found id: ""
	I1210 07:51:46.056309  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.056317  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:46.056325  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:46.056336  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:46.125021  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:46.125041  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:46.139709  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:46.139728  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:46.233096  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:46.221808   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.223333   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227730   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.229200   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:46.221808   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.223333   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227730   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.229200   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:46.233116  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:46.233127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:46.302440  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:46.302460  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:48.833091  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.843740  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:48.843804  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:48.869041  418823 cri.go:89] found id: ""
	I1210 07:51:48.869057  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.869064  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:48.869070  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:48.869139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:48.893750  418823 cri.go:89] found id: ""
	I1210 07:51:48.893765  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.893784  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:48.893790  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:48.893850  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:48.919315  418823 cri.go:89] found id: ""
	I1210 07:51:48.919330  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.919337  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:48.919343  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:48.919413  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:48.944091  418823 cri.go:89] found id: ""
	I1210 07:51:48.944107  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.944114  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:48.944120  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:48.944178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:48.968980  418823 cri.go:89] found id: ""
	I1210 07:51:48.968995  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.969002  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:48.969007  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:48.969066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:48.994258  418823 cri.go:89] found id: ""
	I1210 07:51:48.994272  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.994279  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:48.994294  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:48.994354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:49.021988  418823 cri.go:89] found id: ""
	I1210 07:51:49.022004  418823 logs.go:282] 0 containers: []
	W1210 07:51:49.022012  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:49.022019  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:49.022029  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:49.089579  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:49.089605  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:49.118629  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:49.118648  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:49.191180  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:49.191204  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:49.208309  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:49.208325  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:49.273461  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:49.264556   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.265266   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.266981   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.267634   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.269070   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:49.264556   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.265266   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.266981   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.267634   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.269070   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
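(Every describe-nodes attempt dies on the same dial error: nothing is listening on localhost:8441, so the TCP connect is refused before kubectl can even speak HTTP. A quick probe that separates "port closed" from other failures; 8441 is simply the apiserver port this profile uses:)

// probe.go: sketch of checking the apiserver port directly.
package main

import (
	"fmt"
	"net"
	"time"
)

// probeAPIServer dials the apiserver's TCP port. A "connection refused"
// here means no process is bound to the port, matching the repeated
// `dial tcp [::1]:8441: connect: connection refused` errors above.
func probeAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return err
	}
	conn.Close()
	return nil
}

func main() {
	if err := probeAPIServer("localhost:8441"); err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	fmt.Println("apiserver port is open")
}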
	I1210 07:51:51.775168  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.785506  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:51.785567  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:51.810828  418823 cri.go:89] found id: ""
	I1210 07:51:51.810843  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.810860  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:51.810865  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:51.810926  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:51.835270  418823 cri.go:89] found id: ""
	I1210 07:51:51.835285  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.835292  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:51.835297  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:51.835357  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:51.862106  418823 cri.go:89] found id: ""
	I1210 07:51:51.862121  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.862129  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:51.862134  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:51.862203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:51.887726  418823 cri.go:89] found id: ""
	I1210 07:51:51.887741  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.887749  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:51.887754  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:51.887816  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:51.916383  418823 cri.go:89] found id: ""
	I1210 07:51:51.916398  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.916405  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:51.916409  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:51.916479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:51.945251  418823 cri.go:89] found id: ""
	I1210 07:51:51.945266  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.945273  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:51.945278  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:51.945337  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:51.970333  418823 cri.go:89] found id: ""
	I1210 07:51:51.970348  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.970357  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:51.970365  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:51.970385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:51.998969  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:51.998986  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:52.071390  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:52.071420  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:52.087389  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:52.087406  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:52.154961  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:52.145998   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.146914   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.148561   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.149089   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.150792   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:52.145998   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.146914   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.148561   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.149089   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.150792   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:52.154973  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:52.154985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:54.734714  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.745090  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:54.745151  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:54.770064  418823 cri.go:89] found id: ""
	I1210 07:51:54.770079  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.770086  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:54.770091  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:54.770149  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:54.796152  418823 cri.go:89] found id: ""
	I1210 07:51:54.796167  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.796174  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:54.796179  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:54.796241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:54.822080  418823 cri.go:89] found id: ""
	I1210 07:51:54.822095  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.822102  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:54.822107  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:54.822175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:54.849868  418823 cri.go:89] found id: ""
	I1210 07:51:54.849883  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.849891  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:54.849895  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:54.849951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:54.875726  418823 cri.go:89] found id: ""
	I1210 07:51:54.875741  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.875748  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:54.875753  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:54.875815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:54.905509  418823 cri.go:89] found id: ""
	I1210 07:51:54.905524  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.905531  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:54.905536  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:54.905595  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:54.931115  418823 cri.go:89] found id: ""
	I1210 07:51:54.931138  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.931146  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:54.931154  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:54.931164  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:54.997885  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:54.997906  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:55.030067  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:55.030094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:55.099098  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:55.099116  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:55.113912  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:55.113934  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:55.200955  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:55.191641   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.192478   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.194192   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.195162   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.196812   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:55.191641   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.192478   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.194192   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.195162   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.196812   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:57.701770  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.712296  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:57.712359  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:57.742200  418823 cri.go:89] found id: ""
	I1210 07:51:57.742217  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.742225  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:57.742230  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:57.742288  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:57.770042  418823 cri.go:89] found id: ""
	I1210 07:51:57.770056  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.770063  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:57.770068  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:57.770126  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:57.795451  418823 cri.go:89] found id: ""
	I1210 07:51:57.795464  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.795471  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:57.795477  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:57.795536  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:57.823068  418823 cri.go:89] found id: ""
	I1210 07:51:57.823084  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.823091  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:57.823097  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:57.823160  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:57.849968  418823 cri.go:89] found id: ""
	I1210 07:51:57.849982  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.849998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:57.850003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:57.850064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:57.877868  418823 cri.go:89] found id: ""
	I1210 07:51:57.877881  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.877889  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:57.877894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:57.877954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:57.903803  418823 cri.go:89] found id: ""
	I1210 07:51:57.903823  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.903830  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:57.903838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:57.903849  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:57.970812  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:57.970831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:57.985765  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:57.985786  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:58.070052  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:58.061598   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.062333   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.063943   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.064517   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.066053   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:58.061598   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.062333   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.063943   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.064517   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.066053   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:58.070062  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:58.070076  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:58.138971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:58.138993  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:00.678904  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.689904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:00.689965  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:00.717867  418823 cri.go:89] found id: ""
	I1210 07:52:00.717882  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.717889  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:00.717895  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:00.717960  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:00.746728  418823 cri.go:89] found id: ""
	I1210 07:52:00.746743  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.746750  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:00.746755  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:00.746815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:00.771995  418823 cri.go:89] found id: ""
	I1210 07:52:00.772009  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.772016  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:00.772021  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:00.772084  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:00.801311  418823 cri.go:89] found id: ""
	I1210 07:52:00.801326  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.801333  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:00.801338  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:00.801400  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:00.827977  418823 cri.go:89] found id: ""
	I1210 07:52:00.827992  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.827999  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:00.828004  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:00.828064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:00.857640  418823 cri.go:89] found id: ""
	I1210 07:52:00.857653  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.857661  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:00.857666  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:00.857723  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:00.886162  418823 cri.go:89] found id: ""
	I1210 07:52:00.886176  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.886183  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:00.886192  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:00.886203  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:00.900682  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:00.900699  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:00.962996  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:00.954850   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.955457   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957002   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957542   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.959068   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:00.963006  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:00.963044  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:01.030923  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:01.030945  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:01.064661  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:01.064678  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
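	The cycle above repeats the same crictl filter for each control-plane component. The equivalent by-hand queries on the node look like this (a minimal sketch using the flags shown in the log):

		sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = no container, matching the log
		sudo crictl ps -a --quiet --name=etcd
		sudo crictl ps -a --state=exited                  # surface any containers that crashed and exited

	An empty ID list for every component, as seen here, means the pods were never created rather than created and then crashed.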
	I1210 07:52:03.634114  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.644373  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:03.644437  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:03.670228  418823 cri.go:89] found id: ""
	I1210 07:52:03.670242  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.670250  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:03.670255  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:03.670313  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:03.697715  418823 cri.go:89] found id: ""
	I1210 07:52:03.697730  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.697737  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:03.697742  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:03.697800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:03.725317  418823 cri.go:89] found id: ""
	I1210 07:52:03.725331  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.725338  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:03.725344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:03.725406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:03.754932  418823 cri.go:89] found id: ""
	I1210 07:52:03.754947  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.754954  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:03.754959  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:03.755055  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:03.781710  418823 cri.go:89] found id: ""
	I1210 07:52:03.781724  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.781731  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:03.781736  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:03.781799  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:03.806748  418823 cri.go:89] found id: ""
	I1210 07:52:03.806761  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.806769  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:03.806773  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:03.806839  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:03.831941  418823 cri.go:89] found id: ""
	I1210 07:52:03.831956  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.831963  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:03.831970  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:03.831980  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:03.893889  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:03.885770   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.886207   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.887763   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.888077   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.889552   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:03.893899  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:03.893910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:03.963740  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:03.963762  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:03.994617  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:03.994633  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:04.064848  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:04.064869  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:06.580763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.590814  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:06.590876  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:06.617862  418823 cri.go:89] found id: ""
	I1210 07:52:06.617877  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.617884  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:06.617889  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:06.617952  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:06.642344  418823 cri.go:89] found id: ""
	I1210 07:52:06.642364  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.642372  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:06.642376  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:06.642434  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:06.668168  418823 cri.go:89] found id: ""
	I1210 07:52:06.668181  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.668189  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:06.668194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:06.668252  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:06.693569  418823 cri.go:89] found id: ""
	I1210 07:52:06.693584  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.693591  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:06.693596  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:06.693655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:06.719248  418823 cri.go:89] found id: ""
	I1210 07:52:06.719272  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.719281  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:06.719286  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:06.719353  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:06.744269  418823 cri.go:89] found id: ""
	I1210 07:52:06.744298  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.744306  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:06.744311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:06.744384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:06.769456  418823 cri.go:89] found id: ""
	I1210 07:52:06.769485  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.769493  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:06.769501  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:06.769520  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:06.835122  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:06.826477   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.827191   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.828919   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.829490   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.831268   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:06.835134  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:06.835145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:06.903874  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:06.903896  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:06.932245  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:06.932261  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:06.999686  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:06.999707  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
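	minikube re-probes for a running kube-apiserver process on roughly a three-second cadence, as the timestamps show. The same wait loop, expressed directly in shell (a sketch of the idea, not minikube's actual implementation):

		# block until an apiserver process with 'minikube' in its command line appears
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
			sleep 3
		done

	Here the loop would never terminate, since no kube-apiserver process is ever started.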
	I1210 07:52:09.516631  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.527151  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:09.527214  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:09.553162  418823 cri.go:89] found id: ""
	I1210 07:52:09.553175  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.553182  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:09.553187  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:09.553248  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:09.577770  418823 cri.go:89] found id: ""
	I1210 07:52:09.577785  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.577792  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:09.577797  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:09.577857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:09.603741  418823 cri.go:89] found id: ""
	I1210 07:52:09.603755  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.603765  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:09.603770  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:09.603830  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:09.631507  418823 cri.go:89] found id: ""
	I1210 07:52:09.631521  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.631529  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:09.631534  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:09.631597  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:09.657315  418823 cri.go:89] found id: ""
	I1210 07:52:09.657329  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.657342  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:09.657347  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:09.657406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:09.682591  418823 cri.go:89] found id: ""
	I1210 07:52:09.682606  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.682613  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:09.682619  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:09.682677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:09.708020  418823 cri.go:89] found id: ""
	I1210 07:52:09.708034  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.708042  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:09.708049  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:09.708062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:09.777964  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:09.777985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:09.792349  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:09.792367  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:09.854411  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:09.846045   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.846788   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848379   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848685   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.850192   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:09.854421  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:09.854434  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:09.922233  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:09.922255  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:12.457145  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.468643  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:12.468721  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:12.494760  418823 cri.go:89] found id: ""
	I1210 07:52:12.494774  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.494782  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:12.494787  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:12.494853  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:12.520639  418823 cri.go:89] found id: ""
	I1210 07:52:12.520653  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.520673  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:12.520678  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:12.520738  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:12.546812  418823 cri.go:89] found id: ""
	I1210 07:52:12.546827  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.546834  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:12.546839  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:12.546899  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:12.573531  418823 cri.go:89] found id: ""
	I1210 07:52:12.573546  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.573553  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:12.573558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:12.573623  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:12.600389  418823 cri.go:89] found id: ""
	I1210 07:52:12.600403  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.600411  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:12.600416  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:12.600475  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:12.630232  418823 cri.go:89] found id: ""
	I1210 07:52:12.630257  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.630265  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:12.630271  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:12.630340  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:12.656013  418823 cri.go:89] found id: ""
	I1210 07:52:12.656027  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.656035  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:12.656042  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:12.656058  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:12.727638  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:12.727667  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:12.742877  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:12.742895  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:12.807790  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:12.799348   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.800103   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.801856   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.802374   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.803909   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:12.807802  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:12.807814  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:12.876103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:12.876124  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
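	With no containers reported by any runtime query, the remaining suspects are the container runtime and the kubelet themselves. Two standard checks on the node (assuming the systemd unit names crio and kubelet used throughout this log):

		sudo systemctl status crio kubelet --no-pager
		sudo journalctl -u kubelet -n 50 --no-pager | grep -iE 'error|fail'

	The kubelet journal gathered in each cycle above is the long-form version of the second command.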
	I1210 07:52:15.409499  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.424003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:15.424080  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:15.458307  418823 cri.go:89] found id: ""
	I1210 07:52:15.458341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.458348  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:15.458353  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:15.458428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:15.488619  418823 cri.go:89] found id: ""
	I1210 07:52:15.488634  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.488641  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:15.488646  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:15.488709  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:15.513795  418823 cri.go:89] found id: ""
	I1210 07:52:15.513809  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.513817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:15.513831  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:15.513888  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:15.539219  418823 cri.go:89] found id: ""
	I1210 07:52:15.539233  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.539240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:15.539245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:15.539305  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:15.565461  418823 cri.go:89] found id: ""
	I1210 07:52:15.565475  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.565490  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:15.565495  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:15.565554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:15.597327  418823 cri.go:89] found id: ""
	I1210 07:52:15.597341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.597348  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:15.597354  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:15.597412  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:15.622974  418823 cri.go:89] found id: ""
	I1210 07:52:15.622994  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.623001  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:15.623047  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:15.623059  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:15.690204  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:15.681087   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.681887   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.683667   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.684195   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.685805   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:15.690215  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:15.690226  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:15.758230  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:15.758252  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:15.788867  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:15.788884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:15.856134  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:15.856154  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:18.371925  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.382408  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:18.382482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:18.408893  418823 cri.go:89] found id: ""
	I1210 07:52:18.408907  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.408914  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:18.408919  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:18.408994  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:18.444341  418823 cri.go:89] found id: ""
	I1210 07:52:18.444355  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.444374  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:18.444380  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:18.444450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:18.476809  418823 cri.go:89] found id: ""
	I1210 07:52:18.476823  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.476830  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:18.476835  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:18.476892  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:18.503052  418823 cri.go:89] found id: ""
	I1210 07:52:18.503066  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.503073  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:18.503078  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:18.503150  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:18.529967  418823 cri.go:89] found id: ""
	I1210 07:52:18.529981  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.529998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:18.530003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:18.530095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:18.555604  418823 cri.go:89] found id: ""
	I1210 07:52:18.555619  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.555626  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:18.555631  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:18.555692  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:18.580758  418823 cri.go:89] found id: ""
	I1210 07:52:18.580773  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.580781  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:18.580789  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:18.580803  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:18.649536  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:18.641471   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.642112   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.643861   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.644377   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.645509   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:18.649546  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:18.649558  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:18.720152  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:18.720174  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:18.749804  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:18.749823  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:18.819943  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:18.819965  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
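	All of the failed describe-nodes attempts go through the version-pinned kubectl under /var/lib/minikube/binaries. The same call can be issued by hand to reproduce the failure outside the harness (paths taken verbatim from the log above):

		sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
			--kubeconfig=/var/lib/minikube/kubeconfig cluster-info

	Against a healthy cluster this prints the control-plane URL; here it would return the same connection-refused error.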
	I1210 07:52:21.337138  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.347127  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:21.347189  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:21.373895  418823 cri.go:89] found id: ""
	I1210 07:52:21.373918  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.373926  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:21.373931  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:21.373998  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:21.399869  418823 cri.go:89] found id: ""
	I1210 07:52:21.399896  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.399903  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:21.399908  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:21.399979  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:21.427202  418823 cri.go:89] found id: ""
	I1210 07:52:21.427219  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.427226  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:21.427231  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:21.427299  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:21.458325  418823 cri.go:89] found id: ""
	I1210 07:52:21.458348  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.458355  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:21.458360  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:21.458429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:21.488232  418823 cri.go:89] found id: ""
	I1210 07:52:21.488246  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.488253  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:21.488259  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:21.488318  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:21.523678  418823 cri.go:89] found id: ""
	I1210 07:52:21.523693  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.523700  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:21.523706  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:21.523774  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:21.554053  418823 cri.go:89] found id: ""
	I1210 07:52:21.554068  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.554076  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:21.554084  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:21.554094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:21.584626  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:21.584643  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:21.650495  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:21.650516  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:21.665376  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:21.665393  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:21.728186  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:21.719729   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.720322   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722158   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722717   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.724385   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:21.728197  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:21.728210  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:24.296826  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:24.306876  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:24.306941  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:24.331566  418823 cri.go:89] found id: ""
	I1210 07:52:24.331580  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.331587  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:24.331592  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:24.331654  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:24.364290  418823 cri.go:89] found id: ""
	I1210 07:52:24.364304  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.364312  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:24.364317  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:24.364375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:24.394840  418823 cri.go:89] found id: ""
	I1210 07:52:24.394855  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.394863  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:24.394871  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:24.394927  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:24.423155  418823 cri.go:89] found id: ""
	I1210 07:52:24.423169  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.423176  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:24.423181  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:24.423237  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:24.448495  418823 cri.go:89] found id: ""
	I1210 07:52:24.448509  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.448517  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:24.448522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:24.448582  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:24.473213  418823 cri.go:89] found id: ""
	I1210 07:52:24.473228  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.473244  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:24.473250  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:24.473311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:24.498332  418823 cri.go:89] found id: ""
	I1210 07:52:24.498346  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.498363  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:24.498371  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:24.498386  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:24.512582  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:24.512599  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:24.576630  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:24.568982   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.569512   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.570996   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.571433   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.572843   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:24.568982   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.569512   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.570996   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.571433   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.572843   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:24.576640  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:24.576651  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:24.643309  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:24.643329  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:24.671954  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:24.671973  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
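Up to this point every probe has failed the same way: no kube-apiserver (or any other control-plane) container exists, so each kubectl call dies with "connection refused" on localhost:8441, the apiserver port this profile uses. A quick manual reproduction of the same two checks, assuming SSH access to the minikube node (the crictl command is taken verbatim from the log; the curl probe is an illustrative addition, not something minikube runs here):

    # Any kube-apiserver container, running or exited?
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Anything listening on the apiserver port?
    curl -sk https://localhost:8441/livez || echo "apiserver not reachable"

An empty crictl result corresponds to the repeated found id: "" lines above.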
	I1210 07:52:27.241302  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:27.251489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:27.251554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:27.276224  418823 cri.go:89] found id: ""
	I1210 07:52:27.276239  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.276247  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:27.276252  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:27.276315  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:27.302841  418823 cri.go:89] found id: ""
	I1210 07:52:27.302855  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.302862  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:27.302867  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:27.302934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:27.329134  418823 cri.go:89] found id: ""
	I1210 07:52:27.329148  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.329155  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:27.329160  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:27.329217  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:27.355218  418823 cri.go:89] found id: ""
	I1210 07:52:27.355233  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.355240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:27.355245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:27.355310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:27.380928  418823 cri.go:89] found id: ""
	I1210 07:52:27.380942  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.380948  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:27.380953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:27.381016  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:27.405139  418823 cri.go:89] found id: ""
	I1210 07:52:27.405153  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.405160  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:27.405165  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:27.405224  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:27.434261  418823 cri.go:89] found id: ""
	I1210 07:52:27.434274  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.434281  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:27.434288  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:27.434308  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:27.512344  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:27.512364  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:27.526600  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:27.526616  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:27.593338  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:27.585550   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.586220   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.587805   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.588181   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.589629   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:27.585550   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.586220   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.587805   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.588181   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.589629   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:27.593348  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:27.593360  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:27.660306  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:27.660330  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
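The "container status" step uses a small shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a first resolves crictl's full path (falling back to the bare name when which finds nothing), and only if that whole crictl invocation fails does it try docker. A hedged sketch of the same idea in isolation:

    # Prefer crictl when available; otherwise fall back to docker.
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi

One difference: the original one-liner also falls back to docker when crictl exists but exits non-zero, which the if/else above does not replicate.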
	I1210 07:52:30.190245  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:30.200692  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:30.200762  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:30.225476  418823 cri.go:89] found id: ""
	I1210 07:52:30.225491  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.225498  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:30.225503  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:30.225561  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:30.252256  418823 cri.go:89] found id: ""
	I1210 07:52:30.252270  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.252277  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:30.252282  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:30.252339  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:30.277929  418823 cri.go:89] found id: ""
	I1210 07:52:30.277943  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.277950  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:30.277955  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:30.278013  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:30.303604  418823 cri.go:89] found id: ""
	I1210 07:52:30.303619  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.303627  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:30.303631  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:30.303695  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:30.328592  418823 cri.go:89] found id: ""
	I1210 07:52:30.328606  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.328620  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:30.328625  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:30.328683  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:30.357680  418823 cri.go:89] found id: ""
	I1210 07:52:30.357694  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.357701  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:30.357706  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:30.357772  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:30.383058  418823 cri.go:89] found id: ""
	I1210 07:52:30.383071  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.383085  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:30.383093  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:30.383103  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:30.451001  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:30.451264  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:30.466690  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:30.466709  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:30.535653  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:30.527209   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.528061   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.529830   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.530197   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.531734   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:30.527209   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.528061   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.529830   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.530197   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.531734   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:30.535662  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:30.535673  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:30.603957  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:30.603978  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
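Each iteration in this stretch (one roughly every three seconds) is the same probe sequence: pgrep for a kube-apiserver process, then one crictl query per control-plane component, then the kubelet/dmesg/describe-nodes/CRI-O/container-status log gathering. A minimal sketch of the per-component probe, assuming crictl is on PATH (the component list is copied from the log):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

Every iteration logged here returns an empty ID list for all seven names, which is why the wait loop keeps retrying without progress.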
	I1210 07:52:33.138030  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:33.148615  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:33.148680  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:33.174834  418823 cri.go:89] found id: ""
	I1210 07:52:33.174848  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.174855  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:33.174860  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:33.174922  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:33.205206  418823 cri.go:89] found id: ""
	I1210 07:52:33.205221  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.205228  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:33.205233  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:33.205296  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:33.235457  418823 cri.go:89] found id: ""
	I1210 07:52:33.235472  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.235480  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:33.235485  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:33.235548  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:33.260204  418823 cri.go:89] found id: ""
	I1210 07:52:33.260218  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.260225  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:33.260230  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:33.260290  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:33.285426  418823 cri.go:89] found id: ""
	I1210 07:52:33.285440  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.285448  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:33.285453  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:33.285513  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:33.310040  418823 cri.go:89] found id: ""
	I1210 07:52:33.310054  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.310068  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:33.310073  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:33.310135  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:33.334636  418823 cri.go:89] found id: ""
	I1210 07:52:33.334650  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.334658  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:33.334665  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:33.334676  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:33.400914  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:33.392977   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.393712   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395357   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395749   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.397284   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:33.392977   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.393712   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395357   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395749   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.397284   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:33.400923  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:33.400934  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:33.489102  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:33.489132  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:33.523301  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:33.523319  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:33.590429  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:33.590450  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
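The dmesg invocation is worth decoding: -P disables the pager, -H selects human-readable output, -L=never strips color escapes so the captured text stays clean, and --level limits output to warn and more severe kernel messages before tail keeps the last 400 lines. The same filter spelled out with long options (this reading assumes util-linux dmesg; option sets can vary by version):

    sudo dmesg --nopager --human --color=never \
        --level=warn,err,crit,alert,emerg | tail -n 400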
	I1210 07:52:36.107174  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:36.117293  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:36.117353  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:36.141455  418823 cri.go:89] found id: ""
	I1210 07:52:36.141469  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.141477  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:36.141482  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:36.141541  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:36.172812  418823 cri.go:89] found id: ""
	I1210 07:52:36.172826  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.172833  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:36.172838  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:36.172901  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:36.201760  418823 cri.go:89] found id: ""
	I1210 07:52:36.201774  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.201781  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:36.201786  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:36.201845  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:36.227525  418823 cri.go:89] found id: ""
	I1210 07:52:36.227539  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.227553  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:36.227558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:36.227617  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:36.255643  418823 cri.go:89] found id: ""
	I1210 07:52:36.255657  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.255664  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:36.255669  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:36.255729  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:36.281030  418823 cri.go:89] found id: ""
	I1210 07:52:36.281044  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.281052  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:36.281057  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:36.281115  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:36.307190  418823 cri.go:89] found id: ""
	I1210 07:52:36.307204  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.307211  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:36.307219  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:36.307231  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:36.321687  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:36.321705  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:36.383640  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:36.374708   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.375130   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.376841   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.377402   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.379213   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:36.374708   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.375130   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.376841   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.377402   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.379213   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:36.383650  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:36.383672  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:36.452123  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:36.452142  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:36.485724  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:36.485743  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
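The five memcache.go:265 lines per attempt come from client-go's cached discovery layer: kubectl retries the API group list a few times before printing the final "connection to the server ... was refused" summary, so each failed describe-nodes produces the same five-line burst. A single direct probe shows the same failure without the retries (illustrative command, reusing the binary and kubeconfig paths from the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /livez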
	I1210 07:52:39.051733  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:39.062052  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:39.062152  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:39.086707  418823 cri.go:89] found id: ""
	I1210 07:52:39.086722  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.086729  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:39.086734  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:39.086793  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:39.111720  418823 cri.go:89] found id: ""
	I1210 07:52:39.111734  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.111742  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:39.111747  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:39.111807  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:39.135349  418823 cri.go:89] found id: ""
	I1210 07:52:39.135364  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.135371  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:39.135376  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:39.135435  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:39.160834  418823 cri.go:89] found id: ""
	I1210 07:52:39.160857  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.160865  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:39.160871  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:39.160938  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:39.189613  418823 cri.go:89] found id: ""
	I1210 07:52:39.189626  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.189634  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:39.189639  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:39.189696  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:39.214373  418823 cri.go:89] found id: ""
	I1210 07:52:39.214387  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.214394  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:39.214400  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:39.214457  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:39.239814  418823 cri.go:89] found id: ""
	I1210 07:52:39.239829  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.239837  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:39.239845  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:39.239856  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:39.304237  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:39.304257  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:39.320565  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:39.320583  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:39.389276  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:39.381128   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.381864   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383397   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383829   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.385337   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:39.381128   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.381864   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383397   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383829   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.385337   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:39.389286  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:39.389297  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:39.466908  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:39.466930  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:42.005528  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:42.023294  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:42.023367  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:42.058874  418823 cri.go:89] found id: ""
	I1210 07:52:42.058903  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.058911  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:42.058932  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:42.059040  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:42.089784  418823 cri.go:89] found id: ""
	I1210 07:52:42.089801  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.089809  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:42.089814  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:42.089881  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:42.121634  418823 cri.go:89] found id: ""
	I1210 07:52:42.121650  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.121658  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:42.121663  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:42.121737  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:42.153538  418823 cri.go:89] found id: ""
	I1210 07:52:42.153555  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.153563  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:42.153569  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:42.153644  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:42.183586  418823 cri.go:89] found id: ""
	I1210 07:52:42.183603  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.183611  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:42.183619  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:42.183688  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:42.213049  418823 cri.go:89] found id: ""
	I1210 07:52:42.213067  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.213078  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:42.213084  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:42.213165  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:42.242211  418823 cri.go:89] found id: ""
	I1210 07:52:42.242229  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.242241  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:42.242250  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:42.242268  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:42.258546  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:42.258571  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:42.332221  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:42.323428   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.324132   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.325892   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.326478   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.328088   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:42.323428   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.324132   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.325892   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.326478   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.328088   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:42.332230  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:42.332241  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:42.398832  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:42.398851  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:42.439292  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:42.439308  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:45.012889  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:45.052510  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:45.052580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:45.096465  418823 cri.go:89] found id: ""
	I1210 07:52:45.096488  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.096496  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:45.096501  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:45.096574  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:45.131426  418823 cri.go:89] found id: ""
	I1210 07:52:45.131442  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.131450  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:45.131456  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:45.131530  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:45.179314  418823 cri.go:89] found id: ""
	I1210 07:52:45.179331  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.179340  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:45.179345  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:45.179416  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:45.224508  418823 cri.go:89] found id: ""
	I1210 07:52:45.224525  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.224534  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:45.224540  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:45.224616  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:45.259822  418823 cri.go:89] found id: ""
	I1210 07:52:45.259850  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.259859  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:45.259870  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:45.259980  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:45.289141  418823 cri.go:89] found id: ""
	I1210 07:52:45.289157  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.289164  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:45.289170  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:45.289256  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:45.317720  418823 cri.go:89] found id: ""
	I1210 07:52:45.317749  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.317764  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:45.317796  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:45.317831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:45.385230  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:45.375821   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.376581   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378081   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378688   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.380382   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:45.375821   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.376581   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378081   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378688   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.380382   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:45.385240  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:45.385251  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:45.456646  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:45.456667  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:45.489700  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:45.489717  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:45.554187  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:45.554206  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:48.069065  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:48.079822  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:48.079950  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:48.110229  418823 cri.go:89] found id: ""
	I1210 07:52:48.110244  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.110251  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:48.110256  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:48.110317  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:48.138842  418823 cri.go:89] found id: ""
	I1210 07:52:48.138856  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.138864  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:48.138869  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:48.138928  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:48.164708  418823 cri.go:89] found id: ""
	I1210 07:52:48.164722  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.164730  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:48.164735  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:48.164793  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:48.190030  418823 cri.go:89] found id: ""
	I1210 07:52:48.190056  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.190063  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:48.190069  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:48.190160  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:48.214783  418823 cri.go:89] found id: ""
	I1210 07:52:48.214798  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.214824  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:48.214830  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:48.214899  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:48.242669  418823 cri.go:89] found id: ""
	I1210 07:52:48.242684  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.242692  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:48.242697  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:48.242758  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:48.269761  418823 cri.go:89] found id: ""
	I1210 07:52:48.269776  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.269784  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:48.269791  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:48.269802  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:48.334847  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:48.334871  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:48.349781  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:48.349796  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:48.422853  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:48.408060   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.408898   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411067   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411847   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.413714   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:48.408060   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.408898   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411067   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411847   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.413714   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:48.422867  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:48.422877  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:48.504694  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:48.504717  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
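
The cycle above is minikube's apiserver wait loop gathering diagnostics: it pgreps for a kube-apiserver process, then asks the CRI for containers of each control-plane component, and every `crictl ps -a --quiet --name=<component>` query comes back with an empty ID list, hence the "0 containers" warnings. A minimal Go sketch of that enumeration step, shelling out to the same crictl call that appears in the log (helper names are illustrative, not minikube's actual cri.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs is a hypothetical helper mirroring the crictl call in the
// log above; it is not minikube's cri.ListContainers implementation.
func listContainerIDs(name string) ([]string, error) {
	// --quiet prints one container ID per line; -a includes exited containers.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		// The log's `0 containers: []` corresponds to an empty ids slice here.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
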
	I1210 07:52:51.036528  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.046592  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.046665  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.073731  418823 cri.go:89] found id: ""
	I1210 07:52:51.073746  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.073753  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.073759  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.073819  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.100005  418823 cri.go:89] found id: ""
	I1210 07:52:51.100019  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.100027  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.100031  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.100095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.125872  418823 cri.go:89] found id: ""
	I1210 07:52:51.125897  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.125905  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.125910  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.125970  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:51.151761  418823 cri.go:89] found id: ""
	I1210 07:52:51.151775  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.151783  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:51.151788  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:51.151846  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:51.178046  418823 cri.go:89] found id: ""
	I1210 07:52:51.178060  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.178068  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:51.178074  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:51.178143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:51.205729  418823 cri.go:89] found id: ""
	I1210 07:52:51.205743  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.205750  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:51.205756  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:51.205813  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:51.231485  418823 cri.go:89] found id: ""
	I1210 07:52:51.231498  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.231505  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:51.231512  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:51.231522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:51.295749  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:51.295769  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:51.310814  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:51.310832  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:51.374238  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:51.365806   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.366220   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.367678   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.368628   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.370186   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:51.365806   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.366220   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.367678   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.368628   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.370186   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:51.374248  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:51.374260  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:51.442190  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:51.442209  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
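
Each "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver at localhost:8441 and dies with "dial tcp [::1]:8441: connect: connection refused", which is consistent with crictl finding no kube-apiserver container at all. A standalone Go probe for that symptom (an illustration, not minikube code), distinguishing a refused connection from other dial failures:

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("something is listening on 8441")
		return
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		// Matches the stderr above: nothing is bound to the apiserver port,
		// consistent with crictl reporting zero kube-apiserver containers.
		fmt.Println("connection refused: no listener on 8441")
		return
	}
	fmt.Printf("other dial failure: %v\n", err)
}
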
	I1210 07:52:53.979674  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:53.989805  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:53.989873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.022480  418823 cri.go:89] found id: ""
	I1210 07:52:54.022494  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.022501  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.022507  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.022571  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.049837  418823 cri.go:89] found id: ""
	I1210 07:52:54.049851  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.049858  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.049864  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.049924  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.079149  418823 cri.go:89] found id: ""
	I1210 07:52:54.079164  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.079172  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.079177  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.079244  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.110317  418823 cri.go:89] found id: ""
	I1210 07:52:54.110332  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.110339  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.110344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.110401  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.137776  418823 cri.go:89] found id: ""
	I1210 07:52:54.137798  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.137806  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.137812  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.137873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.162601  418823 cri.go:89] found id: ""
	I1210 07:52:54.162615  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.162622  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.162629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.162690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:54.188677  418823 cri.go:89] found id: ""
	I1210 07:52:54.188691  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.188698  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:54.188706  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:54.188720  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:54.255918  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:54.255940  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:54.270493  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:54.270513  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:54.347104  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:54.330795   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.331491   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333174   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333794   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.335404   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:54.330795   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.331491   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333174   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333794   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.335404   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:54.347114  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:54.347127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:54.415651  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:54.415676  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
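
The "container status" gather uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: prefer crictl (by resolved path where which succeeds), and only if that whole command fails try the docker CLI. A rough Go equivalent of that try-then-fall-back shape (illustrative only, not the ssh_runner implementation):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the shell fallback in the log (hypothetical name):
// try crictl first and, like the shell's ||, fall back to docker on failure.
func containerStatus() (string, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return string(out), nil
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}
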
	I1210 07:52:56.950504  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:56.960908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:56.960974  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:56.986942  418823 cri.go:89] found id: ""
	I1210 07:52:56.986957  418823 logs.go:282] 0 containers: []
	W1210 07:52:56.986964  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:56.986969  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:56.987046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.014060  418823 cri.go:89] found id: ""
	I1210 07:52:57.014088  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.014095  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.014100  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.014192  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.040046  418823 cri.go:89] found id: ""
	I1210 07:52:57.040061  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.040069  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.040075  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.040139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.065400  418823 cri.go:89] found id: ""
	I1210 07:52:57.065427  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.065435  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.065441  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.065511  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.094105  418823 cri.go:89] found id: ""
	I1210 07:52:57.094127  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.094135  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.094140  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.094203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.120409  418823 cri.go:89] found id: ""
	I1210 07:52:57.120425  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.120432  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.120438  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.120498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.146119  418823 cri.go:89] found id: ""
	I1210 07:52:57.146134  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.146142  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.146150  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:57.146160  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:57.160510  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:57.160526  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:57.225221  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:57.216448   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.217062   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.218858   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.219548   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.221235   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:57.216448   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.217062   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.218858   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.219548   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.221235   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:57.225232  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:57.225253  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:57.293765  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:57.293785  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.326044  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:57.326061  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:59.896294  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:59.906460  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:59.906522  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:59.930908  418823 cri.go:89] found id: ""
	I1210 07:52:59.930922  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.930930  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:59.930935  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:59.930999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:59.956028  418823 cri.go:89] found id: ""
	I1210 07:52:59.956042  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.956049  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:59.956054  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:59.956120  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:59.981032  418823 cri.go:89] found id: ""
	I1210 07:52:59.981046  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.981053  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:59.981058  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:59.981116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.027952  418823 cri.go:89] found id: ""
	I1210 07:53:00.027967  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.027975  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.027981  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.028053  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.149242  418823 cri.go:89] found id: ""
	I1210 07:53:00.149275  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.149301  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.149308  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.149381  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.205658  418823 cri.go:89] found id: ""
	I1210 07:53:00.205676  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.205684  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.205691  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.205842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.272868  418823 cri.go:89] found id: ""
	I1210 07:53:00.272884  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.272892  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.272901  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.272914  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:00.364734  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:00.349008   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.349679   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.351763   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.352235   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.354014   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:00.349008   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.349679   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.351763   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.352235   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.354014   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:00.364745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:00.364757  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:00.441561  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:00.441581  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:00.486703  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.486722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.551636  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.551658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.068015  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.078410  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.078481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.103362  418823 cri.go:89] found id: ""
	I1210 07:53:03.103378  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.103385  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.103391  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.103451  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.129650  418823 cri.go:89] found id: ""
	I1210 07:53:03.129668  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.129676  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.129681  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.129753  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.156057  418823 cri.go:89] found id: ""
	I1210 07:53:03.156072  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.156079  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.156085  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.156143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.181869  418823 cri.go:89] found id: ""
	I1210 07:53:03.181895  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.181903  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.181908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.181976  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.210043  418823 cri.go:89] found id: ""
	I1210 07:53:03.210056  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.210064  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.210069  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.210148  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.234991  418823 cri.go:89] found id: ""
	I1210 07:53:03.235006  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.235046  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.235051  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.235119  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.261578  418823 cri.go:89] found id: ""
	I1210 07:53:03.261605  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.261612  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.261620  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.261630  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:03.326335  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.326355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.340836  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.340853  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.407609  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.398750   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.399755   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401402   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401872   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.403636   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:03.398750   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.399755   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401402   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401872   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.403636   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:03.407623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:03.407637  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:03.494941  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.494964  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.031492  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.042260  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.042330  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.069383  418823 cri.go:89] found id: ""
	I1210 07:53:06.069398  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.069405  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.069410  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.069471  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.095692  418823 cri.go:89] found id: ""
	I1210 07:53:06.095706  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.095713  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.095718  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.095783  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.122565  418823 cri.go:89] found id: ""
	I1210 07:53:06.122579  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.122585  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.122590  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.122647  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.147461  418823 cri.go:89] found id: ""
	I1210 07:53:06.147476  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.147483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.147489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.147549  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.172221  418823 cri.go:89] found id: ""
	I1210 07:53:06.172235  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.172243  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.172248  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.172306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.200403  418823 cri.go:89] found id: ""
	I1210 07:53:06.200417  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.200424  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.200429  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.200487  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.224557  418823 cri.go:89] found id: ""
	I1210 07:53:06.224572  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.224578  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.224586  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.224597  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.285061  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.277547   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.278040   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279487   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279882   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.281310   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:06.277547   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.278040   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279487   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279882   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.281310   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:06.285071  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:06.285082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:06.351298  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.351317  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.379592  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.379609  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.448278  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.448298  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
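
Besides the CRI queries, every cycle snapshots host-side logs: the last 400 journald lines for the kubelet and crio units, plus kernel messages at warning level and above. The exact one-liners appear in the log; a small Go driver that runs the same commands locally (illustrative, assuming bash and the same tools on PATH, whereas minikube runs them over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one of the log-collection one-liners from the cycle above.
func gather(cmd string) string {
	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out)
}

func main() {
	for _, c := range []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	} {
		fmt.Printf("== %s ==\n%s\n", c, gather(c))
	}
}
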
	I1210 07:53:08.966418  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:08.976886  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:08.976953  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.010205  418823 cri.go:89] found id: ""
	I1210 07:53:09.010221  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.010248  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.010253  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.010336  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:09.039128  418823 cri.go:89] found id: ""
	I1210 07:53:09.039143  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.039150  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.039155  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.039225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.066093  418823 cri.go:89] found id: ""
	I1210 07:53:09.066108  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.066116  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.066121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.066218  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.091920  418823 cri.go:89] found id: ""
	I1210 07:53:09.091934  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.091948  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.091953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.092014  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.118286  418823 cri.go:89] found id: ""
	I1210 07:53:09.118301  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.118309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.118314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.118374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.143614  418823 cri.go:89] found id: ""
	I1210 07:53:09.143628  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.143635  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.143641  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.143705  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.168425  418823 cri.go:89] found id: ""
	I1210 07:53:09.168440  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.168447  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.168455  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:09.168465  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:09.236920  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.236943  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.269085  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.269103  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.339867  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.339886  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.354523  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.354541  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.432066  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.420805   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.421538   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.423585   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.424589   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.425304   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:09.420805   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.421538   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.423585   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.424589   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.425304   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:11.933763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:11.943879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:11.943943  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:11.969555  418823 cri.go:89] found id: ""
	I1210 07:53:11.969578  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.969586  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:11.969591  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:11.969663  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:11.997107  418823 cri.go:89] found id: ""
	I1210 07:53:11.997121  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.997128  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:11.997133  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:11.997198  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.025616  418823 cri.go:89] found id: ""
	I1210 07:53:12.025630  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.025638  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.025644  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.025712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.052893  418823 cri.go:89] found id: ""
	I1210 07:53:12.052906  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.052914  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.052919  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.052983  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.077956  418823 cri.go:89] found id: ""
	I1210 07:53:12.077979  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.077988  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.077993  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.078064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.104169  418823 cri.go:89] found id: ""
	I1210 07:53:12.104183  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.104200  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.104207  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.104278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.130790  418823 cri.go:89] found id: ""
	I1210 07:53:12.130804  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.130812  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.130819  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.130831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.194759  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.194778  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.209969  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.209985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.272708  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.265266   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.265649   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267247   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267585   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.269013   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:12.265266   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.265649   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267247   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267585   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.269013   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:12.272718  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:12.272730  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:12.339739  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.339759  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
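The pattern above repeats for the rest of this log: kubectl inside the node talks to the apiserver endpoint recorded in /var/lib/minikube/kubeconfig (localhost:8441 here), every attempt fails with "connection refused" because nothing is listening on that port, and the runner gathers diagnostics and polls again a few seconds later. The Go sketch below is a hypothetical reconstruction of that wait loop, not minikube's actual implementation; the pgrep and crictl command lines are copied verbatim from the log, while apiserverUp, the six-minute deadline, and the three-second sleep are illustrative assumptions (the sleep matches the cadence of the cycles above).

    // Hypothetical sketch of the wait loop visible in this log (not minikube's code).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // apiserverUp mirrors the two probes the log records each cycle: pgrep for a
    // kube-apiserver process, then a crictl query for a matching container.
    func apiserverUp() bool {
        if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
            return true
        }
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed timeout, for illustration only
        for time.Now().Before(deadline) {
            if apiserverUp() {
                fmt.Println("kube-apiserver is running")
                return
            }
            time.Sleep(3 * time.Second) // matches the ~3s gap between cycles in this log
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }

In this run the loop never succeeds: every probe below returns nothing, so the same gather-and-retry cycle repeats until the test times out.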
	I1210 07:53:14.870834  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:14.882996  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:14.883096  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:14.912032  418823 cri.go:89] found id: ""
	I1210 07:53:14.912046  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.912053  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:14.912059  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:14.912116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:14.937034  418823 cri.go:89] found id: ""
	I1210 07:53:14.937048  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.937056  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:14.937061  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:14.937122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:14.962165  418823 cri.go:89] found id: ""
	I1210 07:53:14.962180  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.962187  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:14.962192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:14.962256  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:14.987169  418823 cri.go:89] found id: ""
	I1210 07:53:14.987182  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.987190  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:14.987194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:14.987250  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.026690  418823 cri.go:89] found id: ""
	I1210 07:53:15.026706  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.026714  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.026719  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.026788  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.057882  418823 cri.go:89] found id: ""
	I1210 07:53:15.057896  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.057903  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.057908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.057977  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.084042  418823 cri.go:89] found id: ""
	I1210 07:53:15.084057  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.084064  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.084072  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.084082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:15.114864  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.114880  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.179901  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.179922  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.194821  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.194838  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.259725  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.250726   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.251670   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.253338   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.254117   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.255652   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:15.259735  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:15.259747  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
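Each "listing CRI containers" step above reduces to a single command, sudo crictl ps -a --quiet --name=NAME: with --quiet, crictl prints one container ID per line, so empty output is what produces the found id: "" and 0 containers: [] lines. A minimal Go helper along those lines, assuming crictl is installed on the host (listCRIContainers is an illustrative name, not minikube's cri.go API):

    // Illustrative helper, assuming crictl on PATH; empty --quiet output
    // parses to an empty slice, i.e. "0 containers".
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listCRIContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line) // one container ID per line
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listCRIContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

Here every component query comes back empty, which is why no control-plane container logs can be collected and the gatherer falls back to journalctl and dmesg.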
	I1210 07:53:17.826809  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:17.837193  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:17.837254  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:17.863390  418823 cri.go:89] found id: ""
	I1210 07:53:17.863404  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.863411  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:17.863416  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:17.863481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:17.893221  418823 cri.go:89] found id: ""
	I1210 07:53:17.893236  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.893243  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:17.893248  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:17.893306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:17.921130  418823 cri.go:89] found id: ""
	I1210 07:53:17.921155  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.921163  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:17.921168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:17.921236  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:17.945888  418823 cri.go:89] found id: ""
	I1210 07:53:17.945901  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.945909  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:17.945914  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:17.945972  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:17.970988  418823 cri.go:89] found id: ""
	I1210 07:53:17.971002  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.971022  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:17.971027  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:17.971097  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:17.996399  418823 cri.go:89] found id: ""
	I1210 07:53:17.996413  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.996420  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:17.996425  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:17.996494  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.023886  418823 cri.go:89] found id: ""
	I1210 07:53:18.023900  418823 logs.go:282] 0 containers: []
	W1210 07:53:18.023908  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.023931  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.023947  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.090117  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.090136  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.105261  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.105280  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.174300  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.166057   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.166727   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168471   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168892   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.170326   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:18.174310  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:18.174322  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:18.241759  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.241779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:20.779144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:20.788940  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:20.788999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:20.814543  418823 cri.go:89] found id: ""
	I1210 07:53:20.814557  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.814564  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:20.814569  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:20.814634  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:20.839723  418823 cri.go:89] found id: ""
	I1210 07:53:20.839737  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.839744  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:20.839749  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:20.839808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:20.869222  418823 cri.go:89] found id: ""
	I1210 07:53:20.869237  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.869244  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:20.869249  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:20.869310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:20.893562  418823 cri.go:89] found id: ""
	I1210 07:53:20.893576  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.893593  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:20.893598  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:20.893664  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:20.919439  418823 cri.go:89] found id: ""
	I1210 07:53:20.919454  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.919461  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:20.919466  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:20.919526  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:20.947602  418823 cri.go:89] found id: ""
	I1210 07:53:20.947617  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.947624  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:20.947629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:20.947688  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:20.976621  418823 cri.go:89] found id: ""
	I1210 07:53:20.976635  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.976642  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:20.976650  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:20.976666  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.040860  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.040884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.055749  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.055767  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.122414  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.114763   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.115197   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116691   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116998   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.118463   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:21.122458  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:21.122468  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:21.188312  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.188333  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:23.717609  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:23.730817  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:23.730882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:23.756488  418823 cri.go:89] found id: ""
	I1210 07:53:23.756504  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.756512  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:23.756518  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:23.756584  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:23.782540  418823 cri.go:89] found id: ""
	I1210 07:53:23.782555  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.782562  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:23.782567  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:23.782626  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:23.807181  418823 cri.go:89] found id: ""
	I1210 07:53:23.807195  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.807204  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:23.807209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:23.807273  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:23.831876  418823 cri.go:89] found id: ""
	I1210 07:53:23.831891  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.831900  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:23.831905  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:23.831964  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:23.858557  418823 cri.go:89] found id: ""
	I1210 07:53:23.858572  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.858580  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:23.858585  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:23.858646  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:23.883797  418823 cri.go:89] found id: ""
	I1210 07:53:23.883811  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.883820  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:23.883825  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:23.883922  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:23.913668  418823 cri.go:89] found id: ""
	I1210 07:53:23.913682  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.913690  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:23.913698  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:23.913709  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:23.977126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:23.968779   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.969424   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971050   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971601   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.973232   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:23.977136  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:23.977147  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:24.045089  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.045110  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.076143  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.076161  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.142779  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.142798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
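Note that the five log sources are gathered in a different order on each cycle: this cycle ran describe nodes, CRI-O, container status, kubelet, dmesg, while the previous one started with kubelet. That is consistent with the sources living in a Go map, whose iteration order is randomized, though that detail is an assumption about logs.go rather than something this log proves. A minimal sketch of such a gatherer, with the command strings taken verbatim from the log:

    // A minimal sketch, not minikube's logs.go: each source maps to a shell
    // command, and ranging over a Go map visits sources in a randomized order,
    // which would explain the reshuffled "Gathering logs for ..." lines above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "CRI-O":            "sudo journalctl -u crio -n 400",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for name, cmd := range sources { // map iteration order varies run to run
            fmt.Println("Gathering logs for", name, "...")
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                // e.g. describe nodes exits 1 for as long as the apiserver is down
                fmt.Printf("failed %s: %v\n", name, err)
            }
            fmt.Printf("  captured %d bytes\n", len(out))
        }
    }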
	I1210 07:53:26.658408  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:26.669312  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:26.669374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:26.697592  418823 cri.go:89] found id: ""
	I1210 07:53:26.697607  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.697615  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:26.697621  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:26.697687  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:26.725323  418823 cri.go:89] found id: ""
	I1210 07:53:26.725363  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.725370  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:26.725375  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:26.725433  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:26.754039  418823 cri.go:89] found id: ""
	I1210 07:53:26.754053  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.754060  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:26.754066  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:26.754122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:26.788322  418823 cri.go:89] found id: ""
	I1210 07:53:26.788337  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.788344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:26.788349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:26.788408  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:26.818143  418823 cri.go:89] found id: ""
	I1210 07:53:26.818157  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.818180  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:26.818185  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:26.818246  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:26.845686  418823 cri.go:89] found id: ""
	I1210 07:53:26.845699  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.845707  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:26.845714  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:26.845772  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:26.871522  418823 cri.go:89] found id: ""
	I1210 07:53:26.871536  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.871544  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:26.871552  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:26.871568  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:26.902527  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:26.902544  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:26.967583  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:26.967603  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:26.982258  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:26.982275  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.053700  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.045382   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.046023   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.047684   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.048159   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.049667   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:27.053710  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:27.053722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:29.623259  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:29.633196  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:29.633265  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:29.658246  418823 cri.go:89] found id: ""
	I1210 07:53:29.658271  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.658278  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:29.658283  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:29.658358  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:29.685747  418823 cri.go:89] found id: ""
	I1210 07:53:29.685762  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.685769  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:29.685775  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:29.685842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:29.721266  418823 cri.go:89] found id: ""
	I1210 07:53:29.721280  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.721288  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:29.721292  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:29.721350  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:29.746632  418823 cri.go:89] found id: ""
	I1210 07:53:29.746647  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.746655  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:29.746660  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:29.746718  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:29.771709  418823 cri.go:89] found id: ""
	I1210 07:53:29.771725  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.771732  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:29.771737  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:29.771800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:29.801580  418823 cri.go:89] found id: ""
	I1210 07:53:29.801595  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.801602  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:29.801608  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:29.801673  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:29.827750  418823 cri.go:89] found id: ""
	I1210 07:53:29.827764  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.827771  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:29.827780  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:29.827795  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:29.893437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:29.885346   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.885722   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887338   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887997   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.889654   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:29.893447  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:29.893458  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:29.960399  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:29.960419  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:29.991781  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:29.991799  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.072819  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.072841  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:32.588396  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:32.598821  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:32.598882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:32.628590  418823 cri.go:89] found id: ""
	I1210 07:53:32.628604  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.628611  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:32.628616  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:32.628678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:32.658338  418823 cri.go:89] found id: ""
	I1210 07:53:32.658352  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.658359  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:32.658364  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:32.658424  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:32.701705  418823 cri.go:89] found id: ""
	I1210 07:53:32.701719  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.701727  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:32.701732  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:32.701792  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:32.735461  418823 cri.go:89] found id: ""
	I1210 07:53:32.735476  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.735483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:32.735488  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:32.735548  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:32.761096  418823 cri.go:89] found id: ""
	I1210 07:53:32.761109  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.761116  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:32.761121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:32.761180  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:32.787468  418823 cri.go:89] found id: ""
	I1210 07:53:32.787481  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.787488  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:32.787493  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:32.787553  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:32.813085  418823 cri.go:89] found id: ""
	I1210 07:53:32.813098  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.813105  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:32.813113  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:32.813123  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:32.881504  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:32.873323   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.874057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.875714   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.876057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.877280   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:32.881541  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:32.881552  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:32.951245  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:32.951265  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:32.980096  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:32.980113  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:33.046381  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.046400  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:35.561454  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.571515  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.571579  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.596461  418823 cri.go:89] found id: ""
	I1210 07:53:35.596476  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.596483  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.596488  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.596547  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:35.623764  418823 cri.go:89] found id: ""
	I1210 07:53:35.623780  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.623787  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:35.623792  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:35.623852  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:35.649136  418823 cri.go:89] found id: ""
	I1210 07:53:35.649150  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.649159  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:35.649164  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:35.649267  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:35.689785  418823 cri.go:89] found id: ""
	I1210 07:53:35.689799  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.689806  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:35.689820  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:35.689883  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:35.717073  418823 cri.go:89] found id: ""
	I1210 07:53:35.717086  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.717104  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:35.717109  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:35.717167  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:35.747852  418823 cri.go:89] found id: ""
	I1210 07:53:35.747866  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.747874  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:35.747879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:35.747936  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:35.772479  418823 cri.go:89] found id: ""
	I1210 07:53:35.772493  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.772500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:35.772508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:35.772519  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:35.843052  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:35.843075  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:35.857842  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:35.857859  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:35.927434  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:35.918703   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.919740   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921421   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921929   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.923509   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:35.927445  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:35.927457  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:35.996278  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:35.996299  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
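The kubectl failures themselves are purely client-side symptoms: each invocation prints five identical "connection refused" lines from memcache.go (the client's repeated attempts at API group discovery) before giving up with "The connection to the server localhost:8441 was refused". A quick way to confirm the same condition without kubectl is to dial the port directly; this standalone check is an illustration, not part of the test suite:

    // Standalone reachability check for the apiserver port kubectl is using.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            // With no apiserver listening this prints: connect: connection refused
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }

Run against the state captured in this log, the dial fails immediately, matching the dial tcp [::1]:8441: connect: connection refused errors kubectl reports on every cycle.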
	I1210 07:53:38.532848  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.543645  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.543706  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.573367  418823 cri.go:89] found id: ""
	I1210 07:53:38.573382  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.573389  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.573394  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.573456  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.603108  418823 cri.go:89] found id: ""
	I1210 07:53:38.603122  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.603129  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.603134  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.603193  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.629381  418823 cri.go:89] found id: ""
	I1210 07:53:38.629395  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.629402  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.629407  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.629467  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.662313  418823 cri.go:89] found id: ""
	I1210 07:53:38.662327  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.662334  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.662339  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.662402  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:38.704257  418823 cri.go:89] found id: ""
	I1210 07:53:38.704271  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.704279  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:38.704284  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:38.704346  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:38.734287  418823 cri.go:89] found id: ""
	I1210 07:53:38.734302  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.734309  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:38.734315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:38.734375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:38.760452  418823 cri.go:89] found id: ""
	I1210 07:53:38.760467  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.760474  418823 logs.go:284] No container was found matching "kindnet"
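The per-component sweep above issues one crictl query per control-plane component. The same sweep as a standalone shell loop (a sketch; run inside the node):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container matching \"$c\""
      else
        echo "$c: $ids"
      fi
    done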
	I1210 07:53:38.760483  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:38.760493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:38.827227  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:38.827248  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
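The dmesg gather filters kernel messages to warning severity and worse; the equivalent standalone invocation (-P disables the pager, -H enables human-readable timestamps, -L=never disables color):

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400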
	I1210 07:53:38.841994  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:38.842011  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:38.909535  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:38.899398   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.900949   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.901647   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.903592   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.904308   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
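Every "describe nodes" attempt fails the same way: nothing is serving the apiserver port 8441 on this node. A quick manual confirmation from inside the node (a sketch; ss is assumed to be available in the node image):

    sudo ss -ltnp | grep -w 8441 || echo "nothing listening on :8441"
    sudo crictl ps -a --quiet --name=kube-apiserver   # empty here, matching the log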
	I1210 07:53:38.909548  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:38.909559  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:38.977890  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:38.977912  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:41.514495  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.524880  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.524939  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.550178  418823 cri.go:89] found id: ""
	I1210 07:53:41.550208  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.550216  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.550220  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.550289  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.578068  418823 cri.go:89] found id: ""
	I1210 07:53:41.578090  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.578097  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.578102  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.578175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.603754  418823 cri.go:89] found id: ""
	I1210 07:53:41.603768  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.603776  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.603782  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.603840  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.628986  418823 cri.go:89] found id: ""
	I1210 07:53:41.629000  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.629008  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.629013  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.629072  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.654287  418823 cri.go:89] found id: ""
	I1210 07:53:41.654302  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.654309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.654314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.654384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.688416  418823 cri.go:89] found id: ""
	I1210 07:53:41.688430  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.688437  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.688442  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.688498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.713499  418823 cri.go:89] found id: ""
	I1210 07:53:41.713513  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.713521  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.713528  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:41.713538  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:41.730410  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:41.730426  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:41.799336  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:41.790498   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.791386   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793149   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793773   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.794989   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
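The "describe nodes" step uses the node-local kubectl binary with the node-local kubeconfig; both paths are taken verbatim from the log, so the command can be rerun standalone on the node:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig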
	I1210 07:53:41.799346  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:41.799357  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:41.867347  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:41.867369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:41.895652  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:41.895669  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
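The kubelet and CRI-O gathers are plain journalctl reads of the last 400 lines of each unit; standalone equivalents:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400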
	I1210 07:53:44.462932  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.472795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.472854  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.504932  418823 cri.go:89] found id: ""
	I1210 07:53:44.504947  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.504960  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.504965  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.505025  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.535103  418823 cri.go:89] found id: ""
	I1210 07:53:44.535125  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.535133  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.535138  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.535204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.560225  418823 cri.go:89] found id: ""
	I1210 07:53:44.560239  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.560247  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.560252  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.560310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.585575  418823 cri.go:89] found id: ""
	I1210 07:53:44.585597  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.585604  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.585609  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.585668  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.611737  418823 cri.go:89] found id: ""
	I1210 07:53:44.611751  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.611758  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.611763  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.611824  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.636495  418823 cri.go:89] found id: ""
	I1210 07:53:44.636510  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.636517  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.636522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.636580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.665441  418823 cri.go:89] found id: ""
	I1210 07:53:44.665455  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.665463  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.665471  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:44.665481  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:44.702032  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:44.702048  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:44.776362  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:44.776383  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:44.792240  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.792256  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:44.854270  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:44.846404   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.846945   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848504   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848953   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.850457   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:44.854279  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:44.854291  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:47.423978  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.436858  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.436919  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.461997  418823 cri.go:89] found id: ""
	I1210 07:53:47.462011  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.462018  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.462023  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.462125  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.487419  418823 cri.go:89] found id: ""
	I1210 07:53:47.487434  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.487441  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.487446  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.487504  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.512823  418823 cri.go:89] found id: ""
	I1210 07:53:47.512837  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.512845  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.512850  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.512913  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.538819  418823 cri.go:89] found id: ""
	I1210 07:53:47.538833  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.538840  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.538845  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.538903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.563454  418823 cri.go:89] found id: ""
	I1210 07:53:47.563468  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.563476  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.563481  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.563544  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.588347  418823 cri.go:89] found id: ""
	I1210 07:53:47.588361  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.588368  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.588374  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.588435  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.613835  418823 cri.go:89] found id: ""
	I1210 07:53:47.613848  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.613855  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.613863  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:47.613874  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:47.679468  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:47.679488  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:47.695124  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:47.695148  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:47.764330  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:47.755921   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.756582   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758176   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758693   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.760262   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:47.764340  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:47.764350  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:47.834926  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:47.834946  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:50.366762  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.376894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.376958  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.402825  418823 cri.go:89] found id: ""
	I1210 07:53:50.402839  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.402846  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.402851  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.402912  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.431663  418823 cri.go:89] found id: ""
	I1210 07:53:50.431677  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.431685  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.431690  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.431748  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.458799  418823 cri.go:89] found id: ""
	I1210 07:53:50.458813  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.458821  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.458826  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.458885  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.483609  418823 cri.go:89] found id: ""
	I1210 07:53:50.483623  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.483630  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.483635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.483693  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.509720  418823 cri.go:89] found id: ""
	I1210 07:53:50.509735  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.509743  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.509748  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.509808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.535475  418823 cri.go:89] found id: ""
	I1210 07:53:50.535489  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.535496  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.535501  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.535560  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.559559  418823 cri.go:89] found id: ""
	I1210 07:53:50.559572  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.559580  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.559587  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:50.559598  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:50.624409  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:50.624430  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:50.639099  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:50.639117  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:50.734659  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:50.717698   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.718879   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.720716   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.721025   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.727758   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:50.734673  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:50.734686  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:50.801764  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.801789  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.334554  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.344704  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.344767  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.369027  418823 cri.go:89] found id: ""
	I1210 07:53:53.369041  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.369049  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.369054  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.369112  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.392884  418823 cri.go:89] found id: ""
	I1210 07:53:53.392897  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.392904  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.392909  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.392967  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.421604  418823 cri.go:89] found id: ""
	I1210 07:53:53.421618  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.421625  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.421630  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.421690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.446954  418823 cri.go:89] found id: ""
	I1210 07:53:53.446968  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.446976  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.446982  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.447078  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.472681  418823 cri.go:89] found id: ""
	I1210 07:53:53.472696  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.472703  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.472708  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.472769  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.497847  418823 cri.go:89] found id: ""
	I1210 07:53:53.497861  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.497868  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.497873  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.497934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.524109  418823 cri.go:89] found id: ""
	I1210 07:53:53.524123  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.524131  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.524138  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.524149  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:53.593506  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:53.593527  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:53.607933  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:53.607950  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:53.678735  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:53.669132   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671186   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671527   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673119   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673744   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:53.678745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:53.678755  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:53.752843  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.752865  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.287368  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.297545  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.297605  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.327438  418823 cri.go:89] found id: ""
	I1210 07:53:56.327452  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.327459  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.327465  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.327525  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.357601  418823 cri.go:89] found id: ""
	I1210 07:53:56.357616  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.357623  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.357627  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.357686  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.382796  418823 cri.go:89] found id: ""
	I1210 07:53:56.382810  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.382817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.382822  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.382878  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.410018  418823 cri.go:89] found id: ""
	I1210 07:53:56.410032  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.410039  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.410050  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.410110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.437449  418823 cri.go:89] found id: ""
	I1210 07:53:56.437472  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.437480  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.437485  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.437551  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.462063  418823 cri.go:89] found id: ""
	I1210 07:53:56.462077  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.462096  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.462102  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.462178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.489728  418823 cri.go:89] found id: ""
	I1210 07:53:56.489743  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.489750  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.489757  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.489771  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.504129  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.504145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.569498  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.561896   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.562381   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.563902   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.564206   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.565675   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:56.569507  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:56.569518  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:56.638285  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.638304  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.676473  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.676490  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.250249  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.260346  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.260407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.288615  418823 cri.go:89] found id: ""
	I1210 07:53:59.288633  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.288640  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.288645  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.288707  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.314559  418823 cri.go:89] found id: ""
	I1210 07:53:59.314574  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.314581  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.314586  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.314652  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.339212  418823 cri.go:89] found id: ""
	I1210 07:53:59.339227  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.339235  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.339240  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.339296  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.365478  418823 cri.go:89] found id: ""
	I1210 07:53:59.365493  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.365500  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.365505  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.365565  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.391116  418823 cri.go:89] found id: ""
	I1210 07:53:59.391131  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.391138  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.391143  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.391204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.417133  418823 cri.go:89] found id: ""
	I1210 07:53:59.417153  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.417161  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.417166  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.417225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.442940  418823 cri.go:89] found id: ""
	I1210 07:53:59.442954  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.442961  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.442968  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:59.442979  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:59.509257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.509277  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.541319  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.541335  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.607451  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.607470  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.621934  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.621951  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.693437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.684845   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.685597   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687307   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687819   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.689392   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
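The pgrep-then-gather cycle above repeats on a roughly three-second cadence until a kube-apiserver process appears or the overall timeout expires. A hedged approximation of that wait as a shell loop (a sketch of the observed behavior, not minikube's implementation):

    # -x with -f requires the regex to match the full command line,
    # hence the trailing '.*' in the pattern, as in the log
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3
    done
    echo "kube-apiserver process found"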
	I1210 07:54:02.193693  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.204795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.204860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.230168  418823 cri.go:89] found id: ""
	I1210 07:54:02.230185  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.230192  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.230198  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.230311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.263333  418823 cri.go:89] found id: ""
	I1210 07:54:02.263349  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.263356  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.263361  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.263426  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.290361  418823 cri.go:89] found id: ""
	I1210 07:54:02.290376  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.290384  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.290388  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.290448  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.316861  418823 cri.go:89] found id: ""
	I1210 07:54:02.316875  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.316882  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.316894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.316951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.343227  418823 cri.go:89] found id: ""
	I1210 07:54:02.343242  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.343250  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.343255  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.343319  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.370541  418823 cri.go:89] found id: ""
	I1210 07:54:02.370555  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.370562  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.370567  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.370655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.397479  418823 cri.go:89] found id: ""
	I1210 07:54:02.397493  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.397500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.397508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.397522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.463725  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.463746  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.478295  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.478312  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.550548  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.542348   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.542913   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544446   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544888   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.546428   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:02.542348   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.542913   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544446   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544888   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.546428   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.550558  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:02.550569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:02.620103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.620125  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.149959  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.160417  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.160482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.189797  418823 cri.go:89] found id: ""
	I1210 07:54:05.189812  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.189826  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.189831  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.189890  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.217788  418823 cri.go:89] found id: ""
	I1210 07:54:05.217815  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.217823  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.217828  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.217893  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.243664  418823 cri.go:89] found id: ""
	I1210 07:54:05.243678  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.243686  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.243690  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.243749  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.269052  418823 cri.go:89] found id: ""
	I1210 07:54:05.269067  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.269075  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.269080  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.269140  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.294538  418823 cri.go:89] found id: ""
	I1210 07:54:05.294552  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.294559  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.294564  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.294627  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.321865  418823 cri.go:89] found id: ""
	I1210 07:54:05.321880  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.321887  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.321893  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.321954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.348181  418823 cri.go:89] found id: ""
	I1210 07:54:05.348195  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.348203  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.348210  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.348225  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.379036  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.379062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.443960  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.443981  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.458603  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.458620  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.526883  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.519093   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.519877   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.521585   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.522069   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.523123   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:05.519093   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.519877   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.521585   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.522069   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.523123   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:05.526895  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:05.526910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:08.095997  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.105932  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.105991  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.130974  418823 cri.go:89] found id: ""
	I1210 07:54:08.130988  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.130996  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.131001  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.131153  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.155374  418823 cri.go:89] found id: ""
	I1210 07:54:08.155388  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.155396  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.155401  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.155458  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.180878  418823 cri.go:89] found id: ""
	I1210 07:54:08.180892  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.180899  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.180904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.180962  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.209651  418823 cri.go:89] found id: ""
	I1210 07:54:08.209664  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.209672  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.209676  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.209735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.235331  418823 cri.go:89] found id: ""
	I1210 07:54:08.235344  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.235358  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.235362  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.235421  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.260980  418823 cri.go:89] found id: ""
	I1210 07:54:08.260995  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.261003  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.261008  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.261066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.286809  418823 cri.go:89] found id: ""
	I1210 07:54:08.286824  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.286831  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.286838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.286848  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.353470  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.353491  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.367911  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.367928  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.434091  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.426418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.426959   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428869   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.430318   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:08.426418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.426959   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428869   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.430318   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:08.434101  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:08.434120  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:08.502201  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.502221  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:11.031209  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.041439  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.041500  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.067253  418823 cri.go:89] found id: ""
	I1210 07:54:11.067268  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.067275  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.067280  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.067339  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.092951  418823 cri.go:89] found id: ""
	I1210 07:54:11.092965  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.092972  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.092978  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.093038  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.118430  418823 cri.go:89] found id: ""
	I1210 07:54:11.118445  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.118453  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.118458  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.118520  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.144820  418823 cri.go:89] found id: ""
	I1210 07:54:11.144835  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.144843  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.144848  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.144914  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.173374  418823 cri.go:89] found id: ""
	I1210 07:54:11.173388  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.173396  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.173401  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.173459  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.198352  418823 cri.go:89] found id: ""
	I1210 07:54:11.198367  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.198375  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.198380  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.198450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.224536  418823 cri.go:89] found id: ""
	I1210 07:54:11.224550  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.224559  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.224569  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.224579  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.290262  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.290283  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:11.304639  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.304658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.368924  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.359948   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.360716   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362347   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362996   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.364714   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:11.359948   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.360716   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362347   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362996   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.364714   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:11.368934  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:11.368944  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:11.435589  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.435610  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:13.966356  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:13.976957  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:13.977022  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.004519  418823 cri.go:89] found id: ""
	I1210 07:54:14.004536  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.004546  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.004551  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.004633  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.033357  418823 cri.go:89] found id: ""
	I1210 07:54:14.033372  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.033380  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.033385  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.033445  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.059488  418823 cri.go:89] found id: ""
	I1210 07:54:14.059510  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.059517  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.059522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.059585  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.087964  418823 cri.go:89] found id: ""
	I1210 07:54:14.087987  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.087996  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.088002  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.088073  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.114469  418823 cri.go:89] found id: ""
	I1210 07:54:14.114483  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.114501  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.114507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.114580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.144394  418823 cri.go:89] found id: ""
	I1210 07:54:14.144408  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.144415  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.144420  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.144482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.173724  418823 cri.go:89] found id: ""
	I1210 07:54:14.173746  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.173754  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.173762  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.173779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.247855  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.239156   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.239953   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.241733   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.242311   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.243958   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:14.239156   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.239953   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.241733   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.242311   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.243958   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:14.247865  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:14.247879  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:14.317778  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.317798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:14.346568  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.346586  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:14.412678  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.412697  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:16.927406  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:16.938842  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:16.938903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:16.972184  418823 cri.go:89] found id: ""
	I1210 07:54:16.972197  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.972204  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:16.972209  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:16.972268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:16.999114  418823 cri.go:89] found id: ""
	I1210 07:54:16.999129  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.999136  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:16.999141  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:16.999204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.026900  418823 cri.go:89] found id: ""
	I1210 07:54:17.026913  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.026921  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.026926  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.026985  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.053121  418823 cri.go:89] found id: ""
	I1210 07:54:17.053135  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.053143  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.053148  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.053208  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.079184  418823 cri.go:89] found id: ""
	I1210 07:54:17.079198  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.079204  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.079209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.079268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.104597  418823 cri.go:89] found id: ""
	I1210 07:54:17.104611  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.104619  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.104624  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.104681  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.133412  418823 cri.go:89] found id: ""
	I1210 07:54:17.133426  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.133434  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.133441  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.133452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.147432  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.147452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.210612  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.202657   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.203522   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205130   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205458   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.206975   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:17.202657   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.203522   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205130   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205458   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.206975   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:17.210623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:17.210634  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:17.279473  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.279493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:17.307828  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.307852  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:19.881299  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:19.891315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:19.891375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:19.926287  418823 cri.go:89] found id: ""
	I1210 07:54:19.926302  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.926309  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:19.926314  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:19.926373  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:19.961020  418823 cri.go:89] found id: ""
	I1210 07:54:19.961036  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.961043  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:19.961048  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:19.961111  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:19.994369  418823 cri.go:89] found id: ""
	I1210 07:54:19.994383  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.994390  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:19.994395  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:19.994455  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.028896  418823 cri.go:89] found id: ""
	I1210 07:54:20.028911  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.028919  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.028924  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.028989  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.059934  418823 cri.go:89] found id: ""
	I1210 07:54:20.059955  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.059963  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.060015  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.060093  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.086606  418823 cri.go:89] found id: ""
	I1210 07:54:20.086622  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.086629  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.086635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.086703  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.112469  418823 cri.go:89] found id: ""
	I1210 07:54:20.112486  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.112496  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.112504  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.112515  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.176933  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.176953  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.193125  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.193142  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.257603  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.249385   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.249848   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251552   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251918   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.253488   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:20.249385   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.249848   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251552   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251918   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.253488   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:20.257614  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:20.257625  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:20.324617  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.324638  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:22.853766  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:22.864101  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:22.864164  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:22.888959  418823 cri.go:89] found id: ""
	I1210 07:54:22.888974  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.888981  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:22.888986  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:22.889046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:22.921447  418823 cri.go:89] found id: ""
	I1210 07:54:22.921460  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.921468  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:22.921473  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:22.921543  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:22.955505  418823 cri.go:89] found id: ""
	I1210 07:54:22.955519  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.955526  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:22.955531  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:22.955594  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:22.986982  418823 cri.go:89] found id: ""
	I1210 07:54:22.986996  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.987004  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:22.987031  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:22.987094  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.016264  418823 cri.go:89] found id: ""
	I1210 07:54:23.016279  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.016286  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.016291  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.016354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.046460  418823 cri.go:89] found id: ""
	I1210 07:54:23.046474  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.046482  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.046507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.046577  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.074337  418823 cri.go:89] found id: ""
	I1210 07:54:23.074352  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.074361  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.074369  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.074384  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.139358  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.139380  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.154211  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.154233  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.215488  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.206516   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.207119   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.208073   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.209680   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.210273   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:23.206516   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.207119   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.208073   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.209680   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.210273   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:23.215499  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:23.215512  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:23.282950  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.282971  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:25.812054  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.822192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.822255  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.847807  418823 cri.go:89] found id: ""
	I1210 07:54:25.847822  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.847831  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.847836  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.847900  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:25.876611  418823 cri.go:89] found id: ""
	I1210 07:54:25.876626  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.876634  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:25.876638  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:25.876698  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:25.902947  418823 cri.go:89] found id: ""
	I1210 07:54:25.902961  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.902968  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:25.902973  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:25.903056  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:25.944041  418823 cri.go:89] found id: ""
	I1210 07:54:25.944055  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.944062  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:25.944068  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:25.944128  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:25.970835  418823 cri.go:89] found id: ""
	I1210 07:54:25.970849  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.970857  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:25.970862  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:25.970923  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.003198  418823 cri.go:89] found id: ""
	I1210 07:54:26.003214  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.003222  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.003228  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.003300  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.032526  418823 cri.go:89] found id: ""
	I1210 07:54:26.032540  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.032548  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.032556  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.032569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.099635  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.099655  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.114354  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.114373  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.179258  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.170492   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.171349   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173076   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173401   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.174907   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:26.170492   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.171349   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173076   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173401   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.174907   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:26.179269  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:26.179281  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:26.248336  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.248355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
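
The "container status" probe in the line above falls back from crictl to docker. For readers reproducing it by hand on the node, a minimal standalone sketch (assuming passwordless sudo, as minikube's ssh_runner has):

    # List all containers via crictl if it is installed, else fall back to docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
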
	I1210 07:54:28.782480  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.792391  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.792450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.817311  418823 cri.go:89] found id: ""
	I1210 07:54:28.817325  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.817332  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.817338  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.817393  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.841584  418823 cri.go:89] found id: ""
	I1210 07:54:28.841597  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.841605  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.841609  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.841666  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.867004  418823 cri.go:89] found id: ""
	I1210 07:54:28.867040  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.867048  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.867052  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.867110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:28.891591  418823 cri.go:89] found id: ""
	I1210 07:54:28.891604  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.891615  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:28.891621  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:28.891677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:28.927624  418823 cri.go:89] found id: ""
	I1210 07:54:28.927637  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.927645  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:28.927650  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:28.927714  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:28.955409  418823 cri.go:89] found id: ""
	I1210 07:54:28.955423  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.955430  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:28.955435  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:28.955493  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:28.980779  418823 cri.go:89] found id: ""
	I1210 07:54:28.980794  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.980801  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:28.980808  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:28.980819  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:28.995862  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:28.995878  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.065674  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.057420   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.058224   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.059795   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.060097   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.061321   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:29.057420   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.058224   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.059795   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.060097   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.061321   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:29.065683  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:29.065695  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:29.133594  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.133615  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.165522  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.165539  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
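
Each "Gathering logs" cycle above runs the same three probes; condensed here so they can be rerun by hand while debugging a node in this state:

    sudo journalctl -u kubelet -n 400    # last 400 lines of the kubelet unit
    sudo journalctl -u crio -n 400       # last 400 lines of the CRI-O unit
    # kernel-level warnings and errors, no pager, human timestamps, no color
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
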
	I1210 07:54:31.733707  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.743741  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.743803  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.768618  418823 cri.go:89] found id: ""
	I1210 07:54:31.768633  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.768647  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.768652  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.768712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.797641  418823 cri.go:89] found id: ""
	I1210 07:54:31.797656  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.797663  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.797668  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.797729  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.823152  418823 cri.go:89] found id: ""
	I1210 07:54:31.823166  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.823174  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.823178  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.823241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.849644  418823 cri.go:89] found id: ""
	I1210 07:54:31.849659  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.849666  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.849671  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.849735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.877522  418823 cri.go:89] found id: ""
	I1210 07:54:31.877545  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.877553  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.877558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.877625  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.903129  418823 cri.go:89] found id: ""
	I1210 07:54:31.903142  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.903150  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.903155  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.903212  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:31.941362  418823 cri.go:89] found id: ""
	I1210 07:54:31.941376  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.941383  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:31.941391  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:31.941402  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.025544  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.025566  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.040949  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.040969  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.110721  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.101989   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.102773   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104458   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104789   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.106701   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:32.101989   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.102773   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104458   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104789   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.106701   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:32.110732  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:32.110743  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:32.178647  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.178670  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:34.707070  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.717245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.717310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.745693  418823 cri.go:89] found id: ""
	I1210 07:54:34.745707  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.745714  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.745726  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.745790  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.771395  418823 cri.go:89] found id: ""
	I1210 07:54:34.771409  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.771416  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.771421  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.771479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.797775  418823 cri.go:89] found id: ""
	I1210 07:54:34.797788  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.797796  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.797801  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.797861  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.825083  418823 cri.go:89] found id: ""
	I1210 07:54:34.825100  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.825107  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.825112  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.825177  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.850864  418823 cri.go:89] found id: ""
	I1210 07:54:34.850879  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.850896  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.850901  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.850975  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.875132  418823 cri.go:89] found id: ""
	I1210 07:54:34.875146  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.875154  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.875159  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.875227  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.899938  418823 cri.go:89] found id: ""
	I1210 07:54:34.899953  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.899970  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.899979  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:34.899990  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:34.923898  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:34.923916  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.004342  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:34.994993   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.995801   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997355   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997871   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.999627   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:34.994993   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.995801   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997355   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997871   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.999627   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:35.004372  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:35.004385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:35.076257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.076279  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:35.104842  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:35.104858  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:37.672039  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.681946  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.682009  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.706328  418823 cri.go:89] found id: ""
	I1210 07:54:37.706342  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.706349  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.706354  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.706420  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.731157  418823 cri.go:89] found id: ""
	I1210 07:54:37.731171  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.731179  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.731183  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.731243  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.756672  418823 cri.go:89] found id: ""
	I1210 07:54:37.756686  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.756693  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.756698  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.756758  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.782323  418823 cri.go:89] found id: ""
	I1210 07:54:37.782337  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.782344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.782349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.782407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.809398  418823 cri.go:89] found id: ""
	I1210 07:54:37.809411  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.809425  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.809430  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.809488  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.834279  418823 cri.go:89] found id: ""
	I1210 07:54:37.834300  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.834307  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.834311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.834378  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.860329  418823 cri.go:89] found id: ""
	I1210 07:54:37.860343  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.860351  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.860359  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:37.860369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:37.933541  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:37.923454   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.924390   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926115   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926751   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.928659   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:37.923454   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.924390   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926115   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926751   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.928659   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:37.933553  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:37.933564  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:38.012971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.012996  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:38.049266  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:38.049284  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.124985  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.125006  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
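
Every "describe nodes" attempt above dies on "dial tcp [::1]:8441: connect: connection refused", i.e. nothing is listening on the apiserver port. A hedged illustration (not a command from this run) of checking that directly from the node:

    # Connection refused here confirms the apiserver never came up on 8441;
    # any HTTP response, even 401/403, would mean the port is at least open.
    curl -sk https://localhost:8441/healthz || echo "apiserver not listening on 8441"
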
	I1210 07:54:40.640115  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.651783  418823 kubeadm.go:602] duration metric: took 4m3.269334188s to restartPrimaryControlPlane
	W1210 07:54:40.651842  418823 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 07:54:40.651915  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 07:54:41.061132  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:54:41.073851  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:54:41.081733  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:54:41.081788  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:54:41.089443  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:54:41.089453  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:54:41.089505  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:54:41.097510  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:54:41.097570  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:54:41.105078  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:54:41.112622  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:54:41.112682  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:54:41.120112  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.127831  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:54:41.127887  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.135843  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:54:41.143605  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:54:41.143662  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
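
The four grep/rm pairs above apply one rule to four files: any kubeconfig that does not mention the expected control-plane endpoint is removed. A condensed sketch of the same logic (same files, same endpoint, grep -q added for quietness):

    for f in admin kubelet controller-manager scheduler; do
      # exit status 2 (file missing) or 1 (no match) both trigger removal
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
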
	I1210 07:54:41.150893  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:54:41.188283  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:54:41.188576  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:54:41.266308  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:54:41.266369  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:54:41.266407  418823 kubeadm.go:319] OS: Linux
	I1210 07:54:41.266448  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:54:41.266493  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:54:41.266536  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:54:41.266581  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:54:41.266627  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:54:41.266672  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:54:41.266714  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:54:41.266758  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:54:41.266801  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:54:41.327793  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:54:41.327890  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:54:41.327975  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:54:41.335492  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:54:41.340870  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:54:41.340961  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:54:41.341031  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:54:41.341119  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:54:41.341186  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:54:41.341262  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:54:41.341320  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:54:41.341398  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:54:41.341465  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:54:41.341545  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:54:41.341622  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:54:41.341659  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:54:41.341719  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:54:41.831104  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:54:41.953522  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:54:42.205323  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:54:42.449785  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:54:42.618213  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:54:42.619047  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:54:42.621575  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:54:42.624790  418823 out.go:252]   - Booting up control plane ...
	I1210 07:54:42.624883  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:54:42.624959  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:54:42.625035  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:54:42.639751  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:54:42.639880  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:54:42.648702  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:54:42.648797  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:54:42.648841  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:54:42.779710  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:54:42.779857  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:58:42.778273  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000214333s
	I1210 07:58:42.778318  418823 kubeadm.go:319] 
	I1210 07:58:42.778386  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:58:42.778418  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:58:42.778523  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:58:42.778528  418823 kubeadm.go:319] 
	I1210 07:58:42.778632  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:58:42.778679  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:58:42.778709  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:58:42.778712  418823 kubeadm.go:319] 
	I1210 07:58:42.783355  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:58:42.783807  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:58:42.783918  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:58:42.784153  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:58:42.784159  418823 kubeadm.go:319] 
	I1210 07:58:42.784227  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
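
The health check kubeadm reports as timing out is an ordinary HTTP probe; it can be rerun by hand on the node while the kubelet is failing to start (this is the exact probe named in the error above):

    curl -sSL http://127.0.0.1:10248/healthz
    # a healthy kubelet answers with the body "ok"
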
	W1210 07:58:42.784352  418823 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000214333s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
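
kubeadm's own troubleshooting suggestions from the message above, runnable as-is on the node:

    sudo systemctl status kubelet    # unit state, recent restarts and exit codes
    sudo journalctl -xeu kubelet     # kubelet journal with extra explanations, jump to end
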
	
	I1210 07:58:42.784459  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 07:58:43.198112  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:58:43.211996  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:58:43.212056  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:58:43.219732  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:58:43.219740  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:58:43.219791  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:58:43.228096  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:58:43.228153  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:58:43.235851  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:58:43.244105  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:58:43.244161  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:58:43.252172  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.259776  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:58:43.259838  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.267182  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:58:43.274881  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:58:43.274939  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:58:43.282494  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:58:43.323208  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:58:43.323257  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:58:43.392495  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:58:43.392566  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:58:43.392605  418823 kubeadm.go:319] OS: Linux
	I1210 07:58:43.392653  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:58:43.392700  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:58:43.392753  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:58:43.392806  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:58:43.392856  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:58:43.392902  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:58:43.392950  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:58:43.392997  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:58:43.393041  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:58:43.459397  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:58:43.459500  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:58:43.459594  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:58:43.467473  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:58:43.472849  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:58:43.472935  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:58:43.472999  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:58:43.473075  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:58:43.473135  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:58:43.473203  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:58:43.473256  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:58:43.473324  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:58:43.473385  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:58:43.474012  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:58:43.474414  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:58:43.474604  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:58:43.474667  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:58:43.690916  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:58:43.922489  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:58:44.055635  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:58:44.187430  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:58:44.228570  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:58:44.229295  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:58:44.233140  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:58:44.236201  418823 out.go:252]   - Booting up control plane ...
	I1210 07:58:44.236295  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:58:44.236371  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:58:44.236933  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:58:44.251863  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:58:44.251964  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:58:44.259287  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:58:44.259598  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:58:44.259801  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:58:44.391514  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:58:44.391627  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:02:44.389879  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00019224s
	I1210 08:02:44.389912  418823 kubeadm.go:319] 
	I1210 08:02:44.389980  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 08:02:44.390013  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 08:02:44.390123  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 08:02:44.390155  418823 kubeadm.go:319] 
	I1210 08:02:44.390271  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 08:02:44.390303  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 08:02:44.390331  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 08:02:44.390335  418823 kubeadm.go:319] 
	I1210 08:02:44.395328  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:02:44.395720  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 08:02:44.395823  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 08:02:44.396068  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 08:02:44.396072  418823 kubeadm.go:319] 
	I1210 08:02:44.396138  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 08:02:44.396188  418823 kubeadm.go:403] duration metric: took 12m7.052327562s to StartCluster
	I1210 08:02:44.396219  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:02:44.396280  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:02:44.421374  418823 cri.go:89] found id: ""
	I1210 08:02:44.421389  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.421396  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:02:44.421401  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:02:44.421463  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:02:44.447342  418823 cri.go:89] found id: ""
	I1210 08:02:44.447356  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.447363  418823 logs.go:284] No container was found matching "etcd"
	I1210 08:02:44.447368  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:02:44.447429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:02:44.472601  418823 cri.go:89] found id: ""
	I1210 08:02:44.472614  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.472621  418823 logs.go:284] No container was found matching "coredns"
	I1210 08:02:44.472627  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:02:44.472684  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:02:44.501973  418823 cri.go:89] found id: ""
	I1210 08:02:44.501986  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.501993  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:02:44.502000  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:02:44.502059  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:02:44.527997  418823 cri.go:89] found id: ""
	I1210 08:02:44.528011  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.528018  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:02:44.528023  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:02:44.528083  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:02:44.558353  418823 cri.go:89] found id: ""
	I1210 08:02:44.558367  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.558374  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:02:44.558379  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:02:44.558439  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:02:44.583751  418823 cri.go:89] found id: ""
	I1210 08:02:44.583764  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.583771  418823 logs.go:284] No container was found matching "kindnet"
	I1210 08:02:44.583780  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 08:02:44.583792  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:02:44.598048  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:02:44.598065  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:02:44.670126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:02:44.661736   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.662310   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664031   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664536   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.666240   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 08:02:44.661736   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.662310   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664031   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664536   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.666240   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:02:44.670142  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:02:44.670153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:02:44.741133  418823 logs.go:123] Gathering logs for container status ...
	I1210 08:02:44.741153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:02:44.768780  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 08:02:44.768797  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 08:02:44.836964  418823 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 08:02:44.837011  418823 out.go:285] * 
	W1210 08:02:44.837080  418823 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.837155  418823 out.go:285] * 
	W1210 08:02:44.839300  418823 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 08:02:44.844978  418823 out.go:203] 
	W1210 08:02:44.848781  418823 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.848820  418823 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 08:02:44.848841  418823 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 08:02:44.852612  418823 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928236251Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928272445Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928313939Z" level=info msg="Create NRI interface"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928414264Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928423158Z" level=info msg="runtime interface created"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928435031Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.92844098Z" level=info msg="runtime interface starting up..."
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928453222Z" level=info msg="starting plugins..."
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928465965Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 07:50:35 functional-314220 crio[9896]: time="2025-12-10T07:50:35.928531492Z" level=info msg="No systemd watchdog enabled"
	Dec 10 07:50:35 functional-314220 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.331601646Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=62979dbb-32a0-43d5-a3b2-a98045dd82da name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.332356674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=882162fe-73f4-4075-9551-d0a546a62bbf name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.332837779Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=07883432-643e-4682-a159-ee81c5c97128 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.333259733Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=c2e92f0d-1459-497a-8d07-d423bb265c62 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.333667081Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0e2e5041-5e30-43e4-8893-355aed834dc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.334042339Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=16ec4d28-0473-431c-a6c6-f756cd1ed250 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.334553221Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=945bf42d-863d-43db-9dbb-1cb7338cdf87 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.462758438Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=30754360-77fc-41d9-961a-703309105bf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.463612109Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ce58a9d0-ec5e-41a7-a162-73ed5f175442 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464131886Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=af48f6b4-c6c8-458a-8d08-3443ae3e881b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464662517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4191b9f9-c176-40bc-b3bb-ec0edd3076c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465135606Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eaddf230-cd32-4499-a396-5bbd1b1cb31a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465587147Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=300cd477-ebce-4fed-8c84-bc9781d52848 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.466022016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1367fa60-098e-4704-b6f3-b114a75d5405 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:02:48.453183   21315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:48.453765   21315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:48.455429   21315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:48.456087   21315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:48.457203   21315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	[Dec10 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 08:02:48 up  2:45,  0 user,  load average: 0.46, 0.25, 0.50
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:02:45 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:02:46 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 10 08:02:46 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:46 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:46 functional-314220 kubelet[21189]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:46 functional-314220 kubelet[21189]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:46 functional-314220 kubelet[21189]: E1210 08:02:46.471000   21189 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:02:46 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:02:46 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:02:47 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 10 08:02:47 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:47 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:47 functional-314220 kubelet[21208]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:47 functional-314220 kubelet[21208]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:47 functional-314220 kubelet[21208]: E1210 08:02:47.158459   21208 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:02:47 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:02:47 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:02:47 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 10 08:02:47 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:47 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:47 functional-314220 kubelet[21231]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:47 functional-314220 kubelet[21231]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:47 functional-314220 kubelet[21231]: E1210 08:02:47.978039   21231 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:02:47 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:02:47 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (351.838431ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.14s)
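
The kubeadm and kubelet output above converges on a single root cause: kubelet v1.35.0-beta.0 exits on startup because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the control plane never comes up and every later apiserver call is refused. A minimal shell sketch of the diagnosis and of the opt-out named in the [WARNING SystemVerification] text (KEP-5573) follows; the lowerCamel spelling failCgroupV1 in the config file is an assumption taken from that warning, and this is an illustration, not the harness's actual fix:

	# Confirm the host cgroup version: "cgroup2fs" means v2, "tmpfs" means v1.
	stat -fc %T /sys/fs/cgroup/
	# KubeletConfiguration fragment that would have to reach the generated
	# /var/lib/kubelet/config.yaml (e.g. via the "kubeletconfiguration" patch
	# kubeadm reports applying above) before cgroup v1 hosts are allowed:
	cat <<'EOF' > /tmp/kubelet-cgroup-v1.yaml
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF

The report's own suggestion, passing --extra-config=kubelet.cgroup-driver=systemd to minikube start, targets the cgroup driver; the journal entries above indicate it is the cgroup v1 validation itself, not the driver, that stops the kubelet.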

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-314220 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-314220 apply -f testdata/invalidsvc.yaml: exit status 1 (63.540617ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-314220 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
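
The apply fails in client-side validation only because kubectl cannot download the OpenAPI schema from the apiserver, not because of anything in invalidsvc.yaml; it is the same dead control plane as above. A quick reachability probe against the endpoint quoted in the stderr (a sketch, assuming curl is available on the host):

	# Address and port taken from the error message above.
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"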

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-314220 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-314220 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-314220 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-314220 --alsologtostderr -v=1] stderr:
I1210 08:05:07.592965  437764 out.go:360] Setting OutFile to fd 1 ...
I1210 08:05:07.593093  437764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:07.593104  437764 out.go:374] Setting ErrFile to fd 2...
I1210 08:05:07.593109  437764 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:07.593361  437764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 08:05:07.593641  437764 mustload.go:66] Loading cluster: functional-314220
I1210 08:05:07.594111  437764 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:07.594571  437764 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
I1210 08:05:07.610818  437764 host.go:66] Checking if "functional-314220" exists ...
I1210 08:05:07.611188  437764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 08:05:07.669872  437764 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:05:07.660416672 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 08:05:07.669994  437764 api_server.go:166] Checking apiserver status ...
I1210 08:05:07.670067  437764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 08:05:07.670120  437764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
I1210 08:05:07.688102  437764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
W1210 08:05:07.787889  437764 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1210 08:05:07.790996  437764 out.go:179] * The control-plane node functional-314220 apiserver is not running: (state=Stopped)
I1210 08:05:07.793876  437764 out.go:179]   To start a cluster, run: "minikube start -p functional-314220"
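
The dashboard command never prints a URL because minikube's preflight apiserver check fails: the cli_runner lines above show it running pgrep for a kube-apiserver process inside the node over SSH and getting exit status 1. A sketch reproducing the same probe by hand, assuming the profile name from this run:

	# Mirrors the api_server.go health check logged above.
	minikube -p functional-314220 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
		|| echo "no kube-apiserver process running in the node"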
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
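
The inspect output shows the node container still running, with the apiserver port 8441/tcp published on 127.0.0.1:33161; the container itself is up while nothing listens behind the forwarded port. The same binding can be read back without the full inspect dump (standard docker CLI):

	# Prints the host side of the 8441/tcp binding from the inspect output above.
	docker port functional-314220 8441    # expected: 127.0.0.1:33161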
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (339.805571ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons    │ functional-314220 addons list -o json                                                                                                               │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:04 UTC │ 10 Dec 25 08:04 UTC │
	│ ssh       │ functional-314220 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ mount     │ -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001:/mount-9p --alsologtostderr -v=1               │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ ssh       │ functional-314220 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh       │ functional-314220 ssh -- ls -la /mount-9p                                                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh       │ functional-314220 ssh cat /mount-9p/test-1765353900941209775                                                                                        │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh       │ functional-314220 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ ssh       │ functional-314220 ssh sudo umount -f /mount-9p                                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ mount     │ -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1328832652/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ ssh       │ functional-314220 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ ssh       │ functional-314220 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh       │ functional-314220 ssh -- ls -la /mount-9p                                                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh       │ functional-314220 ssh sudo umount -f /mount-9p                                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ mount     │ -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount1 --alsologtostderr -v=1                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ ssh       │ functional-314220 ssh findmnt -T /mount1                                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ mount     │ -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount3 --alsologtostderr -v=1                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ mount     │ -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount2 --alsologtostderr -v=1                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ ssh       │ functional-314220 ssh findmnt -T /mount1                                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh       │ functional-314220 ssh findmnt -T /mount2                                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh       │ functional-314220 ssh findmnt -T /mount3                                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ mount     │ -p functional-314220 --kill=true                                                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ start     │ -p functional-314220 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ start     │ -p functional-314220 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ start     │ -p functional-314220 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-314220 --alsologtostderr -v=1                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 08:05:07
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 08:05:07.394971  437710 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:05:07.395185  437710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:05:07.395218  437710 out.go:374] Setting ErrFile to fd 2...
	I1210 08:05:07.395239  437710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:05:07.395879  437710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:05:07.396304  437710 out.go:368] Setting JSON to false
	I1210 08:05:07.397159  437710 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10058,"bootTime":1765343850,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 08:05:07.397255  437710 start.go:143] virtualization:  
	I1210 08:05:07.400545  437710 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 08:05:07.403482  437710 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:05:07.403567  437710 notify.go:221] Checking for updates...
	I1210 08:05:07.409228  437710 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:05:07.412142  437710 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:05:07.415611  437710 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 08:05:07.418405  437710 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:05:07.421215  437710 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:05:07.424553  437710 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 08:05:07.425232  437710 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:05:07.456710  437710 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:05:07.456836  437710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:05:07.529584  437710 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:05:07.520295348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:05:07.529696  437710 docker.go:319] overlay module found
	I1210 08:05:07.532635  437710 out.go:179] * Using the docker driver based on existing profile
	I1210 08:05:07.535467  437710 start.go:309] selected driver: docker
	I1210 08:05:07.535486  437710 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:05:07.535585  437710 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:05:07.539094  437710 out.go:203] 
	W1210 08:05:07.541939  437710 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 08:05:07.544735  437710 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.462758438Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=30754360-77fc-41d9-961a-703309105bf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.463612109Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ce58a9d0-ec5e-41a7-a162-73ed5f175442 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464131886Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=af48f6b4-c6c8-458a-8d08-3443ae3e881b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464662517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4191b9f9-c176-40bc-b3bb-ec0edd3076c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465135606Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eaddf230-cd32-4499-a396-5bbd1b1cb31a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465587147Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=300cd477-ebce-4fed-8c84-bc9781d52848 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.466022016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1367fa60-098e-4704-b6f3-b114a75d5405 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067588181Z" level=info msg="Checking image status: kicbase/echo-server:functional-314220" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067769762Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067811068Z" level=info msg="Image kicbase/echo-server:functional-314220 not found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067872754Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-314220 found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.095996195Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-314220" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.096290073Z" level=info msg="Image docker.io/kicbase/echo-server:functional-314220 not found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.09635135Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-314220 found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132021615Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-314220" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132192218Z" level=info msg="Image localhost/kicbase/echo-server:functional-314220 not found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.13224538Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-314220 found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091386609Z" level=info msg="Checking image status: kicbase/echo-server:functional-314220" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091538619Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091581639Z" level=info msg="Image kicbase/echo-server:functional-314220 not found" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091640035Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-314220 found" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.133859584Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-314220" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.133994224Z" level=info msg="Image docker.io/kicbase/echo-server:functional-314220 not found" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.134034315Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-314220 found" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.166199113Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-314220" id=16f24585-97cc-4a0b-a37c-9ad94456e987 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:05:08.852947   24012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:05:08.853537   24012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:05:08.854828   24012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:05:08.855311   24012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:05:08.857044   24012 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	[Dec10 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 08:05:08 up  2:47,  0 user,  load average: 0.28, 0.24, 0.45
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:05:06 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:05:06 functional-314220 kubelet[23884]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:05:06 functional-314220 kubelet[23884]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:05:06 functional-314220 kubelet[23884]: E1210 08:05:06.726578   23884 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:05:06 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:05:06 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:05:07 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 510.
	Dec 10 08:05:07 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:05:07 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:05:07 functional-314220 kubelet[23897]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:05:07 functional-314220 kubelet[23897]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:05:07 functional-314220 kubelet[23897]: E1210 08:05:07.479082   23897 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:05:07 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:05:07 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:05:08 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 511.
	Dec 10 08:05:08 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:05:08 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:05:08 functional-314220 kubelet[23924]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:05:08 functional-314220 kubelet[23924]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:05:08 functional-314220 kubelet[23924]: E1210 08:05:08.245583   23924 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:05:08 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:05:08 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:05:08 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 512.
	Dec 10 08:05:08 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:05:08 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (327.652009ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)
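The kubelet section of the log above contains the actual root cause for this failure group: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1", systemd restarts it in a loop (restart counter 510-512), and the apiserver on port 8441 never comes up, which is why every probe sees connection refused. Which cgroup hierarchy a host or node container exposes can be confirmed with a one-line check (a diagnostic sketch, not part of the test suite; "cgroup2fs" means cgroup v2, "tmpfs" means the legacy v1 layout):

	stat -fc %T /sys/fs/cgroup/
	# or, inside the node container:
	docker exec functional-314220 stat -fc %T /sys/fs/cgroup/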

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 status: exit status 2 (299.115023ms)

                                                
                                                
-- stdout --
	functional-314220
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-314220 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (317.5538ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-314220 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 status -o json: exit status 2 (297.797748ms)

                                                
                                                
-- stdout --
	{"Name":"functional-314220","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-314220 status -o json" : exit status 2
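All three status invocations above work as designed; exit status 2 simply reflects the stopped kubelet and apiserver. The JSON form is the easiest to post-process: assuming jq is available on the host (it is not something the harness itself uses), individual fields can be extracted like this:

	out/minikube-linux-arm64 -p functional-314220 status -o json | jq -r '.Kubelet, .APIServer'
	# prints "Stopped" twice for the state captured above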
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (321.879837ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-314220 ssh sudo cat /usr/share/ca-certificates/378528.pem                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image ls                                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image save kicbase/echo-server:functional-314220 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /etc/ssl/certs/3785282.pem                                                                                                 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image rm kicbase/echo-server:functional-314220 --alsologtostderr                                                                        │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /usr/share/ca-certificates/3785282.pem                                                                                     │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image ls                                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /etc/test/nested/copy/378528/hosts                                                                                         │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image ls                                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ service │ functional-314220 service list                                                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ image   │ functional-314220 image save --daemon kicbase/echo-server:functional-314220 --alsologtostderr                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ service │ functional-314220 service list -o json                                                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ service │ functional-314220 service --namespace=default --https --url hello-node                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ ssh     │ functional-314220 ssh echo hello                                                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │ 10 Dec 25 08:03 UTC │
	│ service │ functional-314220 service hello-node --url --format={{.IP}}                                                                                               │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ ssh     │ functional-314220 ssh cat /etc/hostname                                                                                                                   │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │ 10 Dec 25 08:03 UTC │
	│ service │ functional-314220 service hello-node --url                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ tunnel  │ functional-314220 tunnel --alsologtostderr                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ tunnel  │ functional-314220 tunnel --alsologtostderr                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ tunnel  │ functional-314220 tunnel --alsologtostderr                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ addons  │ functional-314220 addons list                                                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:04 UTC │ 10 Dec 25 08:04 UTC │
	│ addons  │ functional-314220 addons list -o json                                                                                                                     │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:04 UTC │ 10 Dec 25 08:04 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:50:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:50:32.899349  418823 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:50:32.899467  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899470  418823 out.go:374] Setting ErrFile to fd 2...
	I1210 07:50:32.899475  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899728  418823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:50:32.900077  418823 out.go:368] Setting JSON to false
	I1210 07:50:32.900875  418823 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9183,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:50:32.900927  418823 start.go:143] virtualization:  
	I1210 07:50:32.904391  418823 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:50:32.909970  418823 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:50:32.910062  418823 notify.go:221] Checking for updates...
	I1210 07:50:32.913755  418823 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:50:32.917032  418823 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:50:32.919882  418823 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:50:32.922630  418823 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:50:32.926514  418823 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:50:32.929831  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:32.929952  418823 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:50:32.973254  418823 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:50:32.973375  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.030281  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.020639734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.030378  418823 docker.go:319] overlay module found
	I1210 07:50:33.033510  418823 out.go:179] * Using the docker driver based on existing profile
	I1210 07:50:33.036367  418823 start.go:309] selected driver: docker
	I1210 07:50:33.036393  418823 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.036475  418823 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:50:33.036573  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.101667  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.09179395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.102098  418823 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:50:33.102120  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:33.102171  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:33.102212  418823 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.107143  418823 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:50:33.110125  418823 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:50:33.113004  418823 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:50:33.115816  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:33.115854  418823 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:50:33.115862  418823 cache.go:65] Caching tarball of preloaded images
	I1210 07:50:33.115956  418823 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:50:33.115966  418823 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 07:50:33.115961  418823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:50:33.116084  418823 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:50:33.135517  418823 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:50:33.135528  418823 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:50:33.135548  418823 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:50:33.135579  418823 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:50:33.135644  418823 start.go:364] duration metric: took 47.935µs to acquireMachinesLock for "functional-314220"
	I1210 07:50:33.135662  418823 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:50:33.135667  418823 fix.go:54] fixHost starting: 
	I1210 07:50:33.135928  418823 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:50:33.153142  418823 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:50:33.153176  418823 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:50:33.156510  418823 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:50:33.156542  418823 machine.go:94] provisionDockerMachine start ...
	I1210 07:50:33.156629  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.173363  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.173679  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.173685  418823 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:50:33.306701  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.306715  418823 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:50:33.306784  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.323402  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.323703  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.323711  418823 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:50:33.463802  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.463873  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.481663  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.481979  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.481993  418823 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:50:33.615371  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:50:33.615387  418823 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:50:33.615415  418823 ubuntu.go:190] setting up certificates
	I1210 07:50:33.615424  418823 provision.go:84] configureAuth start
	I1210 07:50:33.615481  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:33.633344  418823 provision.go:143] copyHostCerts
	I1210 07:50:33.633409  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:50:33.633416  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:50:33.633490  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:50:33.633597  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:50:33.633601  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:50:33.633627  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:50:33.633685  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:50:33.633688  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:50:33.633710  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:50:33.633815  418823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:50:33.839628  418823 provision.go:177] copyRemoteCerts
	I1210 07:50:33.839683  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:50:33.839721  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.857491  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:33.954662  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:50:33.972200  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:50:33.989946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:50:34.010600  418823 provision.go:87] duration metric: took 395.152109ms to configureAuth
	I1210 07:50:34.010620  418823 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:50:34.010837  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:34.010945  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.031319  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:34.031635  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:34.031646  418823 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:50:34.394456  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:50:34.394468  418823 machine.go:97] duration metric: took 1.237919377s to provisionDockerMachine
	I1210 07:50:34.394480  418823 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:50:34.394492  418823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:50:34.394553  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:50:34.394594  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.425725  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.527110  418823 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:50:34.530555  418823 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:50:34.530572  418823 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:50:34.530582  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:50:34.530636  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:50:34.530720  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:50:34.530798  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:50:34.530841  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:50:34.538245  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:34.555946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:50:34.573402  418823 start.go:296] duration metric: took 178.908422ms for postStartSetup
	I1210 07:50:34.573478  418823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:50:34.573515  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.591144  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.684092  418823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:50:34.688828  418823 fix.go:56] duration metric: took 1.553153828s for fixHost
	I1210 07:50:34.688843  418823 start.go:83] releasing machines lock for "functional-314220", held for 1.553192081s
	I1210 07:50:34.688922  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:34.705960  418823 ssh_runner.go:195] Run: cat /version.json
	I1210 07:50:34.705982  418823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:50:34.706002  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.706033  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.724227  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.734363  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.905519  418823 ssh_runner.go:195] Run: systemctl --version
	I1210 07:50:34.911896  418823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:50:34.947949  418823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:50:34.952265  418823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:50:34.952348  418823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:50:34.960087  418823 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:50:34.960100  418823 start.go:496] detecting cgroup driver to use...
	I1210 07:50:34.960131  418823 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:50:34.960194  418823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:50:34.975734  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:50:34.988235  418823 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:50:34.988306  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:50:35.008024  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:50:35.023507  418823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:50:35.140776  418823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:50:35.287143  418823 docker.go:234] disabling docker service ...
	I1210 07:50:35.287205  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:50:35.302191  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:50:35.316045  418823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:50:35.435977  418823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:50:35.558581  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:50:35.570905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:50:35.584271  418823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:50:35.584341  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.593128  418823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:50:35.593191  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.602242  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.611204  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.619936  418823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:50:35.627869  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.636843  418823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.645059  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.653527  418823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:50:35.660914  418823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:50:35.668098  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:35.785150  418823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:50:35.938526  418823 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:50:35.938594  418823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:50:35.943564  418823 start.go:564] Will wait 60s for crictl version
	I1210 07:50:35.943634  418823 ssh_runner.go:195] Run: which crictl
	I1210 07:50:35.950126  418823 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:50:35.976476  418823 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:50:35.976565  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.013250  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.049514  418823 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:50:36.052392  418823 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:50:36.073467  418823 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:50:36.080871  418823 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 07:50:36.083861  418823 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:50:36.084003  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:36.084083  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.122033  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.122045  418823 crio.go:433] Images already preloaded, skipping extraction
	I1210 07:50:36.122104  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.147981  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.147994  418823 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:50:36.148000  418823 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:50:36.148093  418823 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:50:36.148179  418823 ssh_runner.go:195] Run: crio config
	I1210 07:50:36.223557  418823 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 07:50:36.223582  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:36.223591  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:36.223605  418823 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:50:36.223627  418823 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:50:36.223742  418823 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:50:36.223809  418823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:50:36.231667  418823 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:50:36.231750  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:50:36.239592  418823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:50:36.252574  418823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:50:36.265349  418823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1210 07:50:36.278251  418823 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:50:36.281864  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:36.395980  418823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:50:36.662807  418823 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:50:36.662818  418823 certs.go:195] generating shared ca certs ...
	I1210 07:50:36.662833  418823 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:50:36.662974  418823 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:50:36.663036  418823 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:50:36.663044  418823 certs.go:257] generating profile certs ...
	I1210 07:50:36.663128  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:50:36.663184  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:50:36.663221  418823 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:50:36.663326  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:50:36.663359  418823 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:50:36.663370  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:50:36.663396  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:50:36.663419  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:50:36.663444  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:50:36.663487  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:36.664085  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:50:36.684901  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:50:36.704871  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:50:36.724001  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:50:36.742252  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:50:36.759395  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:50:36.776213  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:50:36.793265  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:50:36.810512  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:50:36.828353  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:50:36.845515  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:50:36.862765  418823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:50:36.875122  418823 ssh_runner.go:195] Run: openssl version
	I1210 07:50:36.881447  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.888818  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:50:36.896054  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899817  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899876  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.940839  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:50:36.948274  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.955506  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:50:36.963139  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966818  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966873  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:37.008344  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:50:37.018542  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.028848  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:50:37.037787  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041789  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041883  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.083088  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 07:50:37.090399  418823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:50:37.093984  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:50:37.134711  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:50:37.175584  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:50:37.216322  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:50:37.258210  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:50:37.300727  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 07:50:37.343870  418823 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:37.343957  418823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:50:37.344031  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.373693  418823 cri.go:89] found id: ""
	I1210 07:50:37.373755  418823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:50:37.382429  418823 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:50:37.382439  418823 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:50:37.382493  418823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:50:37.389449  418823 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.389979  418823 kubeconfig.go:125] found "functional-314220" server: "https://192.168.49.2:8441"
	I1210 07:50:37.391548  418823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:50:37.399103  418823 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 07:36:02.271715799 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 07:50:36.273283366 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1210 07:50:37.399128  418823 kubeadm.go:1161] stopping kube-system containers ...
	I1210 07:50:37.399140  418823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 07:50:37.399196  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.434614  418823 cri.go:89] found id: ""
	I1210 07:50:37.434674  418823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 07:50:37.455844  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:50:37.463706  418823 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 07:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 07:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 07:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 10 07:40 /etc/kubernetes/scheduler.conf
	
	I1210 07:50:37.463780  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:50:37.471472  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:50:37.478782  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.478837  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:50:37.486355  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.493976  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.494040  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.501640  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:50:37.509588  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.509645  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:50:37.517276  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:50:37.525049  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:37.571686  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.573879  418823 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.002165526s)
	I1210 07:50:39.573940  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.780126  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.857417  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.903067  418823 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:50:39.903139  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:40.403973  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... same pgrep poll repeated every ~500ms from 07:50:40.9 through 07:51:38.9 (pid 418823, ssh_runner.go:195); no attempt found a kube-apiserver process ...]
	I1210 07:51:39.403315  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
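The half-second cadence above is api_server.go's fixed-interval wait: minikube polls for a kube-apiserver process before it will talk to the API. A sketch of the same loop; the 500ms interval and pgrep pattern are taken from the log, while the one-minute budget is an assumption inferred from the 07:50:39 to 07:51:39 window:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls `sudo pgrep -xnf kube-apiserver.*minikube.*`
// every 500ms; pgrep exits 0 only when a matching process exists.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // apiserver process appeared
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err) // in this run the process never appeared
	}
}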
	I1210 07:51:39.903344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:39.903423  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:39.933715  418823 cri.go:89] found id: ""
	I1210 07:51:39.933730  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.933737  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:39.933741  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:39.933807  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:39.959343  418823 cri.go:89] found id: ""
	I1210 07:51:39.959358  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.959366  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:39.959371  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:39.959428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:39.985280  418823 cri.go:89] found id: ""
	I1210 07:51:39.985294  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.985302  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:39.985307  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:39.985366  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:40.021888  418823 cri.go:89] found id: ""
	I1210 07:51:40.021904  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.021912  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:40.021917  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:40.022019  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:40.050222  418823 cri.go:89] found id: ""
	I1210 07:51:40.050238  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.050245  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:40.050251  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:40.050314  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:40.076513  418823 cri.go:89] found id: ""
	I1210 07:51:40.076528  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.076536  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:40.076541  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:40.076603  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:40.106190  418823 cri.go:89] found id: ""
	I1210 07:51:40.106206  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.106213  418823 logs.go:284] No container was found matching "kindnet"
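With no process to find, minikube falls back to asking the container runtime directly, one control-plane component at a time; here every `crictl ps -a --quiet --name=...` probe returns an empty ID list, hence the repeated `found id: ""`. A sketch of the same sweep, assuming only that `--quiet` prints one container ID per line, as crictl does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same component list the log probes, in the same order.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	} {
		out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out)) // empty slice means no container, matching the log
		fmt.Printf("%-24s %d containers: %v\n", name, len(ids), ids)
	}
}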
	I1210 07:51:40.106221  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:40.106232  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:40.171760  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:40.171781  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:40.188577  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:40.188594  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:40.259869  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:40.250864   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.251869   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.253556   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.254191   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.255824   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:40.250864   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.251869   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.253556   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.254191   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.255824   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
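Every kubectl call above dies the same way: `connect: connection refused` on localhost:8441, meaning nothing is bound to the apiserver port at all, which is consistent with the empty crictl listings. A refused dial is distinct from a listener that accepts but never answers; a quick probe that tells the two apart, as a sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" means no listener; a timeout would instead
	// suggest a listener (or firewall) that never completes the handshake.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("no apiserver listening:", err)
		return
	}
	conn.Close()
	fmt.Println("something accepted the connection on :8441")
}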
	I1210 07:51:40.259893  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:40.259905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:40.330751  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:40.330772  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
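The five "Gathering logs for ..." steps in this cycle map one-to-one onto fixed shell commands, each capped at 400 lines where the source is a journal. Collected for reference; the map form is an editorial sketch, but the commands are verbatim from the log, including the `which crictl || echo crictl` fallback that tries crictl's full path first and then falls back to `docker ps -a`:

package main

import "fmt"

// logSources mirrors the gathering steps in the log above.
var logSources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range logSources {
		fmt.Printf("%-16s %s\n", name, cmd)
	}
}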
	I1210 07:51:42.864666  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.875209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:42.875278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:42.906775  418823 cri.go:89] found id: ""
	I1210 07:51:42.906788  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.906796  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:42.906802  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:42.906860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:42.932120  418823 cri.go:89] found id: ""
	I1210 07:51:42.932134  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.932142  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:42.932147  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:42.932207  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:42.960769  418823 cri.go:89] found id: ""
	I1210 07:51:42.960784  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.960793  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:42.960798  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:42.960857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:42.986269  418823 cri.go:89] found id: ""
	I1210 07:51:42.986285  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.986294  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:42.986299  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:42.986361  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:43.021139  418823 cri.go:89] found id: ""
	I1210 07:51:43.021155  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.021163  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:43.021168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:43.021241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:43.047486  418823 cri.go:89] found id: ""
	I1210 07:51:43.047501  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.047508  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:43.047513  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:43.047576  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:43.073233  418823 cri.go:89] found id: ""
	I1210 07:51:43.073247  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.073255  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:43.073263  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:43.073273  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:43.139078  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:43.139105  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:43.153579  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:43.153595  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:43.240938  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:43.232595   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.233028   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.234681   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.235233   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.236893   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:43.232595   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.233028   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.234681   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.235233   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.236893   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:43.240958  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:43.240970  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:43.308772  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:43.308794  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
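From 07:51:39 onward the log settles into a roughly 3-second outer cycle: one pgrep probe, the sweep of seven crictl queries, then the full log-gathering pass, repeated until the wait gives up. A sketch of that outer loop composing the pieces sketched earlier; the 3s spacing is read off the timestamps (07:51:39, :42, :45, ...) and the overall deadline is an assumption, not a value from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiServerRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func gatherDiagnostics() {
	// Stand-in for the crictl sweep and log-gathering pass shown above.
	out, _ := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	fmt.Printf("container status:\n%s\n", out)
}

func main() {
	deadline := time.Now().Add(10 * time.Minute) // assumed outer budget
	for time.Now().Before(deadline) {
		if apiServerRunning() {
			fmt.Println("apiserver is up")
			return
		}
		gatherDiagnostics()
		time.Sleep(3 * time.Second) // matches the observed cadence
	}
	fmt.Println("gave up waiting for kube-apiserver")
}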
	I1210 07:51:45.841619  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.852276  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:45.852345  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:45.887199  418823 cri.go:89] found id: ""
	I1210 07:51:45.887215  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.887222  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:45.887237  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:45.887324  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:45.918859  418823 cri.go:89] found id: ""
	I1210 07:51:45.918873  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.918880  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:45.918885  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:45.918944  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:45.943991  418823 cri.go:89] found id: ""
	I1210 07:51:45.944006  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.944014  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:45.944019  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:45.944088  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:45.970351  418823 cri.go:89] found id: ""
	I1210 07:51:45.970371  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.970379  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:45.970384  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:45.970444  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:45.995587  418823 cri.go:89] found id: ""
	I1210 07:51:45.995601  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.995609  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:45.995614  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:45.995678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:46.023570  418823 cri.go:89] found id: ""
	I1210 07:51:46.023586  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.023593  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:46.023599  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:46.023660  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:46.056294  418823 cri.go:89] found id: ""
	I1210 07:51:46.056309  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.056317  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:46.056325  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:46.056336  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:46.125021  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:46.125041  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:46.139709  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:46.139728  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:46.233096  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:46.221808   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.223333   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227730   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.229200   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:46.221808   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.223333   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227730   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.229200   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:46.233116  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:46.233127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:46.302440  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:46.302460  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:48.833091  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.843740  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:48.843804  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:48.869041  418823 cri.go:89] found id: ""
	I1210 07:51:48.869057  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.869064  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:48.869070  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:48.869139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:48.893750  418823 cri.go:89] found id: ""
	I1210 07:51:48.893765  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.893784  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:48.893790  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:48.893850  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:48.919315  418823 cri.go:89] found id: ""
	I1210 07:51:48.919330  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.919337  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:48.919343  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:48.919413  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:48.944091  418823 cri.go:89] found id: ""
	I1210 07:51:48.944107  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.944114  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:48.944120  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:48.944178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:48.968980  418823 cri.go:89] found id: ""
	I1210 07:51:48.968995  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.969002  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:48.969007  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:48.969066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:48.994258  418823 cri.go:89] found id: ""
	I1210 07:51:48.994272  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.994279  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:48.994294  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:48.994354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:49.021988  418823 cri.go:89] found id: ""
	I1210 07:51:49.022004  418823 logs.go:282] 0 containers: []
	W1210 07:51:49.022012  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:49.022019  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:49.022029  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:49.089579  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:49.089605  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:49.118629  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:49.118648  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:49.191180  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:49.191204  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:49.208309  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:49.208325  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:49.273461  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:49.264556   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.265266   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.266981   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.267634   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.269070   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:49.264556   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.265266   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.266981   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.267634   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.269070   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:51.775168  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.785506  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:51.785567  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:51.810828  418823 cri.go:89] found id: ""
	I1210 07:51:51.810843  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.810860  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:51.810865  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:51.810926  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:51.835270  418823 cri.go:89] found id: ""
	I1210 07:51:51.835285  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.835292  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:51.835297  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:51.835357  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:51.862106  418823 cri.go:89] found id: ""
	I1210 07:51:51.862121  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.862129  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:51.862134  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:51.862203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:51.887726  418823 cri.go:89] found id: ""
	I1210 07:51:51.887741  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.887749  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:51.887754  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:51.887816  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:51.916383  418823 cri.go:89] found id: ""
	I1210 07:51:51.916398  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.916405  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:51.916409  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:51.916479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:51.945251  418823 cri.go:89] found id: ""
	I1210 07:51:51.945266  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.945273  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:51.945278  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:51.945337  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:51.970333  418823 cri.go:89] found id: ""
	I1210 07:51:51.970348  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.970357  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:51.970365  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:51.970385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:51.998969  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:51.998986  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:52.071390  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:52.071420  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:52.087389  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:52.087406  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:52.154961  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:52.145998   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.146914   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.148561   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.149089   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.150792   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:52.145998   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.146914   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.148561   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.149089   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.150792   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:52.154973  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:52.154985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:54.734714  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.745090  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:54.745151  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:54.770064  418823 cri.go:89] found id: ""
	I1210 07:51:54.770079  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.770086  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:54.770091  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:54.770149  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:54.796152  418823 cri.go:89] found id: ""
	I1210 07:51:54.796167  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.796174  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:54.796179  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:54.796241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:54.822080  418823 cri.go:89] found id: ""
	I1210 07:51:54.822095  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.822102  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:54.822107  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:54.822175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:54.849868  418823 cri.go:89] found id: ""
	I1210 07:51:54.849883  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.849891  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:54.849895  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:54.849951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:54.875726  418823 cri.go:89] found id: ""
	I1210 07:51:54.875741  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.875748  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:54.875753  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:54.875815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:54.905509  418823 cri.go:89] found id: ""
	I1210 07:51:54.905524  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.905531  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:54.905536  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:54.905595  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:54.931115  418823 cri.go:89] found id: ""
	I1210 07:51:54.931138  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.931146  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:54.931154  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:54.931164  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:54.997885  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:54.997906  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:55.030067  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:55.030094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:55.099098  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:55.099116  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:55.113912  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:55.113934  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:55.200955  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:55.191641   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.192478   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.194192   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.195162   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.196812   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:55.191641   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.192478   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.194192   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.195162   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.196812   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:57.701770  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.712296  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:57.712359  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:57.742200  418823 cri.go:89] found id: ""
	I1210 07:51:57.742217  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.742225  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:57.742230  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:57.742288  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:57.770042  418823 cri.go:89] found id: ""
	I1210 07:51:57.770056  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.770063  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:57.770068  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:57.770126  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:57.795451  418823 cri.go:89] found id: ""
	I1210 07:51:57.795464  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.795471  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:57.795477  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:57.795536  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:57.823068  418823 cri.go:89] found id: ""
	I1210 07:51:57.823084  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.823091  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:57.823097  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:57.823160  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:57.849968  418823 cri.go:89] found id: ""
	I1210 07:51:57.849982  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.849998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:57.850003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:57.850064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:57.877868  418823 cri.go:89] found id: ""
	I1210 07:51:57.877881  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.877889  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:57.877894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:57.877954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:57.903803  418823 cri.go:89] found id: ""
	I1210 07:51:57.903823  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.903830  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:57.903838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:57.903849  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:57.970812  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:57.970831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:57.985765  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:57.985786  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:58.070052  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:58.061598   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.062333   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.063943   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.064517   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.066053   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:58.061598   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.062333   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.063943   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.064517   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.066053   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:58.070062  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:58.070076  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:58.138971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:58.138993  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
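
	[editor's note] The cycle above repeats roughly every 2.5 seconds: minikube re-runs pgrep over SSH to check whether a kube-apiserver process exists, and when it does not, falls into the CRI-listing and log-gathering branch before retrying. A minimal sketch of that poll loop, written as an illustrative assumption (the helper name, timeout, and interval are not minikube's actual code; only the pgrep command is copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiServerUp is a hypothetical helper: pgrep exits 0 only when a
	// process matching the pattern exists.
	func apiServerUp() bool {
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
		for time.Now().Before(deadline) {
			if apiServerUp() {
				fmt.Println("apiserver process found")
				return
			}
			// In the real log, minikube gathers diagnostics here before retrying.
			time.Sleep(2500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
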
	I1210 07:52:00.678904  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.689904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:00.689965  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:00.717867  418823 cri.go:89] found id: ""
	I1210 07:52:00.717882  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.717889  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:00.717895  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:00.717960  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:00.746728  418823 cri.go:89] found id: ""
	I1210 07:52:00.746743  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.746750  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:00.746755  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:00.746815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:00.771995  418823 cri.go:89] found id: ""
	I1210 07:52:00.772009  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.772016  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:00.772021  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:00.772084  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:00.801311  418823 cri.go:89] found id: ""
	I1210 07:52:00.801326  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.801333  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:00.801338  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:00.801400  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:00.827977  418823 cri.go:89] found id: ""
	I1210 07:52:00.827992  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.827999  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:00.828004  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:00.828064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:00.857640  418823 cri.go:89] found id: ""
	I1210 07:52:00.857653  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.857661  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:00.857666  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:00.857723  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:00.886162  418823 cri.go:89] found id: ""
	I1210 07:52:00.886176  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.886183  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:00.886192  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:00.886203  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:00.900682  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:00.900699  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:00.962996  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:00.954850   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.955457   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957002   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957542   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.959068   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:00.954850   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.955457   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957002   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957542   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.959068   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:00.963006  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:00.963044  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:01.030923  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:01.030945  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:01.064661  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:01.064678  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
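
	[editor's note] Within each pass, every control-plane component is enumerated the same way: `crictl ps -a --quiet --name=<component>` is run and an empty result produces the "No container was found matching ..." warning. A self-contained sketch of that enumeration, under stated assumptions (the component list and output handling mirror the log; the program itself is illustrative, not minikube source):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			// --quiet prints one container ID per line; empty output
			// means no container matched the name filter.
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
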
	I1210 07:52:03.634114  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.644373  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:03.644437  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:03.670228  418823 cri.go:89] found id: ""
	I1210 07:52:03.670242  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.670250  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:03.670255  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:03.670313  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:03.697715  418823 cri.go:89] found id: ""
	I1210 07:52:03.697730  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.697737  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:03.697742  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:03.697800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:03.725317  418823 cri.go:89] found id: ""
	I1210 07:52:03.725331  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.725338  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:03.725344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:03.725406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:03.754932  418823 cri.go:89] found id: ""
	I1210 07:52:03.754947  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.754954  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:03.754959  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:03.755055  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:03.781710  418823 cri.go:89] found id: ""
	I1210 07:52:03.781724  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.781731  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:03.781736  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:03.781799  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:03.806748  418823 cri.go:89] found id: ""
	I1210 07:52:03.806761  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.806769  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:03.806773  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:03.806839  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:03.831941  418823 cri.go:89] found id: ""
	I1210 07:52:03.831956  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.831963  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:03.831970  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:03.831980  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:03.893889  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:03.885770   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.886207   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.887763   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.888077   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.889552   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:03.885770   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.886207   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.887763   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.888077   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.889552   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:03.893899  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:03.893910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:03.963740  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:03.963762  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:03.994617  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:03.994633  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:04.064848  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:04.064869  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:06.580763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.590814  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:06.590876  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:06.617862  418823 cri.go:89] found id: ""
	I1210 07:52:06.617877  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.617884  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:06.617889  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:06.617952  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:06.642344  418823 cri.go:89] found id: ""
	I1210 07:52:06.642364  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.642372  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:06.642376  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:06.642434  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:06.668168  418823 cri.go:89] found id: ""
	I1210 07:52:06.668181  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.668189  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:06.668194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:06.668252  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:06.693569  418823 cri.go:89] found id: ""
	I1210 07:52:06.693584  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.693591  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:06.693596  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:06.693655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:06.719248  418823 cri.go:89] found id: ""
	I1210 07:52:06.719272  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.719281  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:06.719286  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:06.719353  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:06.744269  418823 cri.go:89] found id: ""
	I1210 07:52:06.744298  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.744306  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:06.744311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:06.744384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:06.769456  418823 cri.go:89] found id: ""
	I1210 07:52:06.769485  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.769493  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:06.769501  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:06.769520  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:06.835122  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:06.826477   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.827191   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.828919   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.829490   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.831268   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:06.826477   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.827191   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.828919   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.829490   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.831268   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:06.835134  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:06.835145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:06.903874  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:06.903896  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:06.932245  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:06.932261  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:06.999686  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:06.999707  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
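
	[editor's note] Each "describe nodes" attempt above fails with `connect: connection refused` on localhost:8441, which is consistent with the empty crictl listings: with no kube-apiserver container running, nothing is bound to the apiserver port inside the node. A minimal probe of the same condition, as an illustrative sketch (the port is taken from the log; the probe is not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// Matches the kubectl errors above when the port is closed.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8441")
	}
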
	I1210 07:52:09.516631  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.527151  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:09.527214  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:09.553162  418823 cri.go:89] found id: ""
	I1210 07:52:09.553175  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.553182  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:09.553187  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:09.553248  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:09.577770  418823 cri.go:89] found id: ""
	I1210 07:52:09.577785  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.577792  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:09.577797  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:09.577857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:09.603741  418823 cri.go:89] found id: ""
	I1210 07:52:09.603755  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.603765  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:09.603770  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:09.603830  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:09.631507  418823 cri.go:89] found id: ""
	I1210 07:52:09.631521  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.631529  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:09.631534  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:09.631597  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:09.657315  418823 cri.go:89] found id: ""
	I1210 07:52:09.657329  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.657342  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:09.657347  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:09.657406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:09.682591  418823 cri.go:89] found id: ""
	I1210 07:52:09.682606  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.682613  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:09.682619  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:09.682677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:09.708020  418823 cri.go:89] found id: ""
	I1210 07:52:09.708034  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.708042  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:09.708049  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:09.708062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:09.777964  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:09.777985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:09.792349  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:09.792367  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:09.854411  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:09.846045   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.846788   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848379   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848685   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.850192   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:09.846045   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.846788   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848379   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848685   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.850192   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:09.854421  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:09.854434  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:09.922233  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:09.922255  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:12.457145  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.468643  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:12.468721  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:12.494760  418823 cri.go:89] found id: ""
	I1210 07:52:12.494774  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.494782  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:12.494787  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:12.494853  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:12.520639  418823 cri.go:89] found id: ""
	I1210 07:52:12.520653  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.520673  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:12.520678  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:12.520738  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:12.546812  418823 cri.go:89] found id: ""
	I1210 07:52:12.546827  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.546834  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:12.546839  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:12.546899  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:12.573531  418823 cri.go:89] found id: ""
	I1210 07:52:12.573546  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.573553  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:12.573558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:12.573623  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:12.600389  418823 cri.go:89] found id: ""
	I1210 07:52:12.600403  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.600411  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:12.600416  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:12.600475  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:12.630232  418823 cri.go:89] found id: ""
	I1210 07:52:12.630257  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.630265  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:12.630271  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:12.630340  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:12.656013  418823 cri.go:89] found id: ""
	I1210 07:52:12.656027  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.656035  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:12.656042  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:12.656058  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:12.727638  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:12.727667  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:12.742877  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:12.742895  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:12.807790  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:12.799348   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.800103   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.801856   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.802374   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.803909   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:12.799348   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.800103   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.801856   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.802374   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.803909   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:12.807802  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:12.807814  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:12.876103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:12.876124  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
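
	[editor's note] Between retries, the diagnostics fan out over one shell command per log source. The commands below are copied verbatim from the log; the runner around them is an assumption for illustration (minikube executes these via its SSH runner rather than locally):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet":          `sudo journalctl -u kubelet -n 400`,
			"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
			"CRI-O":            `sudo journalctl -u crio -n 400`,
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("== %s (err=%v, %d bytes) ==\n", name, err, len(out))
		}
	}

	Note that map iteration order in Go is unspecified, so the sections would print in a different order on each run.
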
	I1210 07:52:15.409499  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.424003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:15.424080  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:15.458307  418823 cri.go:89] found id: ""
	I1210 07:52:15.458341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.458348  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:15.458353  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:15.458428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:15.488619  418823 cri.go:89] found id: ""
	I1210 07:52:15.488634  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.488641  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:15.488646  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:15.488709  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:15.513795  418823 cri.go:89] found id: ""
	I1210 07:52:15.513809  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.513817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:15.513831  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:15.513888  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:15.539219  418823 cri.go:89] found id: ""
	I1210 07:52:15.539233  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.539240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:15.539245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:15.539305  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:15.565461  418823 cri.go:89] found id: ""
	I1210 07:52:15.565475  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.565490  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:15.565495  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:15.565554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:15.597327  418823 cri.go:89] found id: ""
	I1210 07:52:15.597341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.597348  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:15.597354  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:15.597412  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:15.622974  418823 cri.go:89] found id: ""
	I1210 07:52:15.622994  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.623001  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:15.623047  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:15.623059  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:15.690204  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:15.681087   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.681887   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.683667   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.684195   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.685805   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:15.681087   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.681887   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.683667   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.684195   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.685805   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:15.690215  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:15.690226  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:15.758230  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:15.758252  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:15.788867  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:15.788884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:15.856134  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:15.856154  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:18.371925  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.382408  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:18.382482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:18.408893  418823 cri.go:89] found id: ""
	I1210 07:52:18.408907  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.408914  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:18.408919  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:18.408994  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:18.444341  418823 cri.go:89] found id: ""
	I1210 07:52:18.444355  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.444374  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:18.444380  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:18.444450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:18.476809  418823 cri.go:89] found id: ""
	I1210 07:52:18.476823  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.476830  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:18.476835  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:18.476892  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:18.503052  418823 cri.go:89] found id: ""
	I1210 07:52:18.503066  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.503073  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:18.503078  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:18.503150  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:18.529967  418823 cri.go:89] found id: ""
	I1210 07:52:18.529981  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.529998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:18.530003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:18.530095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:18.555604  418823 cri.go:89] found id: ""
	I1210 07:52:18.555619  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.555626  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:18.555631  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:18.555692  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:18.580758  418823 cri.go:89] found id: ""
	I1210 07:52:18.580773  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.580781  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:18.580789  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:18.580803  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:18.649536  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:18.641471   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.642112   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.643861   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.644377   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.645509   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:18.641471   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.642112   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.643861   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.644377   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.645509   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:18.649546  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:18.649558  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:18.720152  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:18.720174  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:18.749804  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:18.749823  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:18.819943  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:18.819965  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:21.337138  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.347127  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:21.347189  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:21.373895  418823 cri.go:89] found id: ""
	I1210 07:52:21.373918  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.373926  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:21.373931  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:21.373998  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:21.399869  418823 cri.go:89] found id: ""
	I1210 07:52:21.399896  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.399903  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:21.399908  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:21.399979  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:21.427202  418823 cri.go:89] found id: ""
	I1210 07:52:21.427219  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.427226  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:21.427231  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:21.427299  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:21.458325  418823 cri.go:89] found id: ""
	I1210 07:52:21.458348  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.458355  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:21.458360  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:21.458429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:21.488232  418823 cri.go:89] found id: ""
	I1210 07:52:21.488246  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.488253  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:21.488259  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:21.488318  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:21.523678  418823 cri.go:89] found id: ""
	I1210 07:52:21.523693  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.523700  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:21.523706  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:21.523774  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:21.554053  418823 cri.go:89] found id: ""
	I1210 07:52:21.554068  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.554076  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:21.554084  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:21.554094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:21.584626  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:21.584643  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:21.650495  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:21.650516  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:21.665376  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:21.665393  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:21.728186  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:21.719729   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.720322   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722158   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722717   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.724385   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:21.719729   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.720322   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722158   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722717   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.724385   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:21.728197  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:21.728210  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
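
	[editor's note] The "Gathering logs for ..." sections rotate order from pass to pass (kubelet, dmesg, describe nodes, CRI-O, container status appear in varying sequence). One plausible explanation, stated as an assumption rather than a fact about logs.go, is that the sources are held in a Go map, whose iteration order is deliberately randomized per run:

	package main

	import "fmt"

	func main() {
		logSources := map[string]bool{
			"kubelet": true, "dmesg": true, "describe nodes": true,
			"CRI-O": true, "container status": true,
		}
		// Run this twice: the printed order will usually differ between runs.
		for name := range logSources {
			fmt.Println("Gathering logs for", name, "...")
		}
	}
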
	I1210 07:52:24.296826  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:24.306876  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:24.306941  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:24.331566  418823 cri.go:89] found id: ""
	I1210 07:52:24.331580  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.331587  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:24.331592  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:24.331654  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:24.364290  418823 cri.go:89] found id: ""
	I1210 07:52:24.364304  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.364312  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:24.364317  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:24.364375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:24.394840  418823 cri.go:89] found id: ""
	I1210 07:52:24.394855  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.394863  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:24.394871  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:24.394927  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:24.423155  418823 cri.go:89] found id: ""
	I1210 07:52:24.423169  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.423176  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:24.423181  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:24.423237  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:24.448495  418823 cri.go:89] found id: ""
	I1210 07:52:24.448509  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.448517  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:24.448522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:24.448582  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:24.473213  418823 cri.go:89] found id: ""
	I1210 07:52:24.473228  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.473244  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:24.473250  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:24.473311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:24.498332  418823 cri.go:89] found id: ""
	I1210 07:52:24.498346  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.498363  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:24.498371  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:24.498386  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:24.512582  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:24.512599  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:24.576630  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:24.568982   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.569512   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.570996   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.571433   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.572843   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:24.576640  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:24.576651  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:24.643309  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:24.643329  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:24.671954  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:24.671973  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:27.241302  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:27.251489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:27.251554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:27.276224  418823 cri.go:89] found id: ""
	I1210 07:52:27.276239  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.276247  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:27.276252  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:27.276315  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:27.302841  418823 cri.go:89] found id: ""
	I1210 07:52:27.302855  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.302862  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:27.302867  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:27.302934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:27.329134  418823 cri.go:89] found id: ""
	I1210 07:52:27.329148  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.329155  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:27.329160  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:27.329217  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:27.355218  418823 cri.go:89] found id: ""
	I1210 07:52:27.355233  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.355240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:27.355245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:27.355310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:27.380928  418823 cri.go:89] found id: ""
	I1210 07:52:27.380942  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.380948  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:27.380953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:27.381016  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:27.405139  418823 cri.go:89] found id: ""
	I1210 07:52:27.405153  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.405160  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:27.405165  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:27.405224  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:27.434261  418823 cri.go:89] found id: ""
	I1210 07:52:27.434274  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.434281  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:27.434288  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:27.434308  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:27.512344  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:27.512364  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:27.526600  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:27.526616  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:27.593338  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:27.585550   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.586220   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.587805   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.588181   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.589629   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:27.593348  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:27.593360  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:27.660306  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:27.660330  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:30.190245  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:30.200692  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:30.200762  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:30.225476  418823 cri.go:89] found id: ""
	I1210 07:52:30.225491  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.225498  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:30.225503  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:30.225561  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:30.252256  418823 cri.go:89] found id: ""
	I1210 07:52:30.252270  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.252277  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:30.252282  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:30.252339  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:30.277929  418823 cri.go:89] found id: ""
	I1210 07:52:30.277943  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.277950  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:30.277955  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:30.278013  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:30.303604  418823 cri.go:89] found id: ""
	I1210 07:52:30.303619  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.303627  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:30.303631  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:30.303695  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:30.328592  418823 cri.go:89] found id: ""
	I1210 07:52:30.328606  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.328620  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:30.328625  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:30.328683  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:30.357680  418823 cri.go:89] found id: ""
	I1210 07:52:30.357694  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.357701  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:30.357706  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:30.357772  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:30.383058  418823 cri.go:89] found id: ""
	I1210 07:52:30.383071  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.383085  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:30.383093  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:30.383103  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:30.451001  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:30.451264  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:30.466690  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:30.466709  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:30.535653  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:30.527209   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.528061   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.529830   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.530197   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.531734   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:30.535662  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:30.535673  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:30.603957  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:30.603978  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:33.138030  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:33.148615  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:33.148680  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:33.174834  418823 cri.go:89] found id: ""
	I1210 07:52:33.174848  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.174855  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:33.174860  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:33.174922  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:33.205206  418823 cri.go:89] found id: ""
	I1210 07:52:33.205221  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.205228  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:33.205233  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:33.205296  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:33.235457  418823 cri.go:89] found id: ""
	I1210 07:52:33.235472  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.235480  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:33.235485  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:33.235548  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:33.260204  418823 cri.go:89] found id: ""
	I1210 07:52:33.260218  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.260225  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:33.260230  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:33.260290  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:33.285426  418823 cri.go:89] found id: ""
	I1210 07:52:33.285440  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.285448  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:33.285453  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:33.285513  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:33.310040  418823 cri.go:89] found id: ""
	I1210 07:52:33.310054  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.310068  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:33.310073  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:33.310135  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:33.334636  418823 cri.go:89] found id: ""
	I1210 07:52:33.334650  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.334658  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:33.334665  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:33.334676  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:33.400914  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:33.392977   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.393712   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395357   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395749   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.397284   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:33.400923  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:33.400934  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:33.489102  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:33.489132  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:33.523301  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:33.523319  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:33.590429  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:33.590450  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:36.107174  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:36.117293  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:36.117353  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:36.141455  418823 cri.go:89] found id: ""
	I1210 07:52:36.141469  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.141477  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:36.141482  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:36.141541  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:36.172812  418823 cri.go:89] found id: ""
	I1210 07:52:36.172826  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.172833  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:36.172838  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:36.172901  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:36.201760  418823 cri.go:89] found id: ""
	I1210 07:52:36.201774  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.201781  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:36.201786  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:36.201845  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:36.227525  418823 cri.go:89] found id: ""
	I1210 07:52:36.227539  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.227553  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:36.227558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:36.227617  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:36.255643  418823 cri.go:89] found id: ""
	I1210 07:52:36.255657  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.255664  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:36.255669  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:36.255729  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:36.281030  418823 cri.go:89] found id: ""
	I1210 07:52:36.281044  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.281052  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:36.281057  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:36.281115  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:36.307190  418823 cri.go:89] found id: ""
	I1210 07:52:36.307204  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.307211  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:36.307219  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:36.307231  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:36.321687  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:36.321705  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:36.383640  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:36.374708   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.375130   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.376841   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.377402   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.379213   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:36.383650  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:36.383672  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:36.452123  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:36.452142  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:36.485724  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:36.485743  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:39.051733  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:39.062052  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:39.062152  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:39.086707  418823 cri.go:89] found id: ""
	I1210 07:52:39.086722  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.086729  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:39.086734  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:39.086793  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:39.111720  418823 cri.go:89] found id: ""
	I1210 07:52:39.111734  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.111742  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:39.111747  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:39.111807  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:39.135349  418823 cri.go:89] found id: ""
	I1210 07:52:39.135364  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.135371  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:39.135376  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:39.135435  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:39.160834  418823 cri.go:89] found id: ""
	I1210 07:52:39.160857  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.160865  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:39.160871  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:39.160938  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:39.189613  418823 cri.go:89] found id: ""
	I1210 07:52:39.189626  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.189634  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:39.189639  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:39.189696  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:39.214373  418823 cri.go:89] found id: ""
	I1210 07:52:39.214387  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.214394  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:39.214400  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:39.214457  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:39.239814  418823 cri.go:89] found id: ""
	I1210 07:52:39.239829  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.239837  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:39.239845  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:39.239856  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:39.304237  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:39.304257  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:39.320565  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:39.320583  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:39.389276  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:39.381128   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.381864   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383397   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383829   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.385337   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:39.389286  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:39.389297  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:39.466908  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:39.466930  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:42.005528  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:42.023294  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:42.023367  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:42.058874  418823 cri.go:89] found id: ""
	I1210 07:52:42.058903  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.058911  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:42.058932  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:42.059040  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:42.089784  418823 cri.go:89] found id: ""
	I1210 07:52:42.089801  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.089809  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:42.089814  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:42.089881  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:42.121634  418823 cri.go:89] found id: ""
	I1210 07:52:42.121650  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.121658  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:42.121663  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:42.121737  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:42.153538  418823 cri.go:89] found id: ""
	I1210 07:52:42.153555  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.153563  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:42.153569  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:42.153644  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:42.183586  418823 cri.go:89] found id: ""
	I1210 07:52:42.183603  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.183611  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:42.183619  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:42.183688  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:42.213049  418823 cri.go:89] found id: ""
	I1210 07:52:42.213067  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.213078  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:42.213084  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:42.213165  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:42.242211  418823 cri.go:89] found id: ""
	I1210 07:52:42.242229  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.242241  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:42.242250  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:42.242268  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:42.258546  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:42.258571  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:42.332221  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:42.323428   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.324132   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.325892   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.326478   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.328088   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:42.332230  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:42.332241  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:42.398832  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:42.398851  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:42.439292  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:42.439308  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:45.012889  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:45.052510  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:45.052580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:45.096465  418823 cri.go:89] found id: ""
	I1210 07:52:45.096488  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.096496  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:45.096501  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:45.096574  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:45.131426  418823 cri.go:89] found id: ""
	I1210 07:52:45.131442  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.131450  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:45.131456  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:45.131530  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:45.179314  418823 cri.go:89] found id: ""
	I1210 07:52:45.179331  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.179340  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:45.179345  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:45.179416  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:45.224508  418823 cri.go:89] found id: ""
	I1210 07:52:45.224525  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.224534  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:45.224540  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:45.224616  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:45.259822  418823 cri.go:89] found id: ""
	I1210 07:52:45.259850  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.259859  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:45.259870  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:45.259980  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:45.289141  418823 cri.go:89] found id: ""
	I1210 07:52:45.289157  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.289164  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:45.289170  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:45.289256  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:45.317720  418823 cri.go:89] found id: ""
	I1210 07:52:45.317749  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.317764  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:45.317796  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:45.317831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:45.385230  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:45.375821   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.376581   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378081   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378688   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.380382   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:45.385240  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:45.385251  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:45.456646  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:45.456667  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:45.489700  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:45.489717  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:45.554187  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:45.554206  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:48.069065  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:48.079822  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:48.079950  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:48.110229  418823 cri.go:89] found id: ""
	I1210 07:52:48.110244  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.110251  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:48.110256  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:48.110317  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:48.138842  418823 cri.go:89] found id: ""
	I1210 07:52:48.138856  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.138864  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:48.138869  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:48.138928  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:48.164708  418823 cri.go:89] found id: ""
	I1210 07:52:48.164722  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.164730  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:48.164735  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:48.164793  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:48.190030  418823 cri.go:89] found id: ""
	I1210 07:52:48.190056  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.190063  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:48.190069  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:48.190160  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:48.214783  418823 cri.go:89] found id: ""
	I1210 07:52:48.214798  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.214824  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:48.214830  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:48.214899  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:48.242669  418823 cri.go:89] found id: ""
	I1210 07:52:48.242684  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.242692  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:48.242697  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:48.242758  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:48.269761  418823 cri.go:89] found id: ""
	I1210 07:52:48.269776  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.269784  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:48.269791  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:48.269802  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:48.334847  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:48.334871  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:48.349781  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:48.349796  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:48.422853  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:48.408060   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.408898   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411067   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411847   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.413714   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
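	The repeated "connection refused" on localhost:8441 above means no process is listening on the apiserver port inside the node. A minimal reachability check, assuming shell access to the minikube node (a sketch for diagnosis, not part of the recorded run):

	    # Is anything bound to the apiserver port? Then probe the standard /healthz endpoint.
	    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
	    curl -k --max-time 5 https://localhost:8441/healthz || true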
	I1210 07:52:48.422867  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:48.422877  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:48.504694  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:48.504717  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
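	Each polling cycle above runs the same per-component container probe. Reproduced by hand it amounts to the loop below (a sketch assuming crictl can reach the CRI-O socket; empty output from crictl ps is what the log reports as "No container was found matching"):

	    # Probe each control-plane component the way the log does:
	    # "crictl ps -a --quiet --name=X" prints matching container IDs, or nothing.
	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$name\""
	      else
	        echo "$name: $ids"
	      fi
	    done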
	I1210 07:52:51.036528  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.046592  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.046665  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.073731  418823 cri.go:89] found id: ""
	I1210 07:52:51.073746  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.073753  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.073759  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.073819  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.100005  418823 cri.go:89] found id: ""
	I1210 07:52:51.100019  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.100027  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.100031  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.100095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.125872  418823 cri.go:89] found id: ""
	I1210 07:52:51.125897  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.125905  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.125910  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.125970  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:51.151761  418823 cri.go:89] found id: ""
	I1210 07:52:51.151775  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.151783  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:51.151788  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:51.151846  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:51.178046  418823 cri.go:89] found id: ""
	I1210 07:52:51.178060  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.178068  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:51.178074  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:51.178143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:51.205729  418823 cri.go:89] found id: ""
	I1210 07:52:51.205743  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.205750  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:51.205756  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:51.205813  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:51.231485  418823 cri.go:89] found id: ""
	I1210 07:52:51.231498  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.231505  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:51.231512  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:51.231522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:51.295749  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:51.295769  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:51.310814  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:51.310832  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:51.374238  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:51.365806   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.366220   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.367678   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.368628   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.370186   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:51.374248  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:51.374260  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:51.442190  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:51.442209  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:53.979674  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:53.989805  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:53.989873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.022480  418823 cri.go:89] found id: ""
	I1210 07:52:54.022494  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.022501  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.022507  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.022571  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.049837  418823 cri.go:89] found id: ""
	I1210 07:52:54.049851  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.049858  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.049864  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.049924  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.079149  418823 cri.go:89] found id: ""
	I1210 07:52:54.079164  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.079172  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.079177  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.079244  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.110317  418823 cri.go:89] found id: ""
	I1210 07:52:54.110332  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.110339  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.110344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.110401  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.137776  418823 cri.go:89] found id: ""
	I1210 07:52:54.137798  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.137806  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.137812  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.137873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.162601  418823 cri.go:89] found id: ""
	I1210 07:52:54.162615  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.162622  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.162629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.162690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:54.188677  418823 cri.go:89] found id: ""
	I1210 07:52:54.188691  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.188698  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:54.188706  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:54.188720  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:54.255918  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:54.255940  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:54.270493  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:54.270513  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:54.347104  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:54.330795   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.331491   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333174   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333794   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.335404   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:54.347114  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:54.347127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:54.415651  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:54.415676  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
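	The "Gathering logs" steps in each cycle map directly onto shell commands already visible above; collected in one place for reference (a sketch using the exact commands from this run, including the versioned kubectl path as recorded):

	    # Same diagnostics the test gathers on every cycle.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    # Fails with "connection refused" while the apiserver is down, as in every cycle above:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig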
	I1210 07:52:56.950504  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:56.960908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:56.960974  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:56.986942  418823 cri.go:89] found id: ""
	I1210 07:52:56.986957  418823 logs.go:282] 0 containers: []
	W1210 07:52:56.986964  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:56.986969  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:56.987046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.014060  418823 cri.go:89] found id: ""
	I1210 07:52:57.014088  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.014095  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.014100  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.014192  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.040046  418823 cri.go:89] found id: ""
	I1210 07:52:57.040061  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.040069  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.040075  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.040139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.065400  418823 cri.go:89] found id: ""
	I1210 07:52:57.065427  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.065435  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.065441  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.065511  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.094105  418823 cri.go:89] found id: ""
	I1210 07:52:57.094127  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.094135  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.094140  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.094203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.120409  418823 cri.go:89] found id: ""
	I1210 07:52:57.120425  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.120432  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.120438  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.120498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.146119  418823 cri.go:89] found id: ""
	I1210 07:52:57.146134  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.146142  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.146150  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:57.146160  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:57.160510  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:57.160526  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:57.225221  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:57.216448   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.217062   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.218858   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.219548   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.221235   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:57.225232  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:57.225253  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:57.293765  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:57.293785  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.326044  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:57.326061  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:59.896294  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:59.906460  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:59.906522  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:59.930908  418823 cri.go:89] found id: ""
	I1210 07:52:59.930922  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.930930  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:59.930935  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:59.930999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:59.956028  418823 cri.go:89] found id: ""
	I1210 07:52:59.956042  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.956049  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:59.956054  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:59.956120  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:59.981032  418823 cri.go:89] found id: ""
	I1210 07:52:59.981046  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.981053  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:59.981058  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:59.981116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.027952  418823 cri.go:89] found id: ""
	I1210 07:53:00.027967  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.027975  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.027981  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.028053  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.149242  418823 cri.go:89] found id: ""
	I1210 07:53:00.149275  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.149301  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.149308  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.149381  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.205658  418823 cri.go:89] found id: ""
	I1210 07:53:00.205676  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.205684  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.205691  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.205842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.272868  418823 cri.go:89] found id: ""
	I1210 07:53:00.272884  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.272892  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.272901  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.272914  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:00.364734  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:00.349008   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.349679   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.351763   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.352235   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.354014   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:00.364745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:00.364757  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:00.441561  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:00.441581  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:00.486703  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.486722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.551636  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.551658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.068015  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.078410  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.078481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.103362  418823 cri.go:89] found id: ""
	I1210 07:53:03.103378  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.103385  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.103391  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.103451  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.129650  418823 cri.go:89] found id: ""
	I1210 07:53:03.129668  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.129676  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.129681  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.129753  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.156057  418823 cri.go:89] found id: ""
	I1210 07:53:03.156072  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.156079  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.156085  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.156143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.181869  418823 cri.go:89] found id: ""
	I1210 07:53:03.181895  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.181903  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.181908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.181976  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.210043  418823 cri.go:89] found id: ""
	I1210 07:53:03.210056  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.210064  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.210069  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.210148  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.234991  418823 cri.go:89] found id: ""
	I1210 07:53:03.235006  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.235046  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.235051  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.235119  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.261578  418823 cri.go:89] found id: ""
	I1210 07:53:03.261605  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.261612  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.261620  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.261630  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:03.326335  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.326355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.340836  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.340853  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.407609  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.398750   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.399755   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401402   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401872   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.403636   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:03.407623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:03.407637  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:03.494941  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.494964  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.031492  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.042260  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.042330  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.069383  418823 cri.go:89] found id: ""
	I1210 07:53:06.069398  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.069405  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.069410  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.069471  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.095692  418823 cri.go:89] found id: ""
	I1210 07:53:06.095706  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.095713  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.095718  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.095783  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.122565  418823 cri.go:89] found id: ""
	I1210 07:53:06.122579  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.122585  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.122590  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.122647  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.147461  418823 cri.go:89] found id: ""
	I1210 07:53:06.147476  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.147483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.147489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.147549  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.172221  418823 cri.go:89] found id: ""
	I1210 07:53:06.172235  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.172243  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.172248  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.172306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.200403  418823 cri.go:89] found id: ""
	I1210 07:53:06.200417  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.200424  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.200429  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.200487  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.224557  418823 cri.go:89] found id: ""
	I1210 07:53:06.224572  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.224578  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.224586  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.224597  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.285061  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.277547   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.278040   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279487   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279882   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.281310   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:06.285071  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:06.285082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:06.351298  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.351317  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.379592  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.379609  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.448278  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.448298  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:08.966418  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:08.976886  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:08.976953  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.010205  418823 cri.go:89] found id: ""
	I1210 07:53:09.010221  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.010248  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.010253  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.010336  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:09.039128  418823 cri.go:89] found id: ""
	I1210 07:53:09.039143  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.039150  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.039155  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.039225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.066093  418823 cri.go:89] found id: ""
	I1210 07:53:09.066108  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.066116  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.066121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.066218  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.091920  418823 cri.go:89] found id: ""
	I1210 07:53:09.091934  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.091948  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.091953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.092014  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.118286  418823 cri.go:89] found id: ""
	I1210 07:53:09.118301  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.118309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.118314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.118374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.143614  418823 cri.go:89] found id: ""
	I1210 07:53:09.143628  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.143635  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.143641  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.143705  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.168425  418823 cri.go:89] found id: ""
	I1210 07:53:09.168440  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.168447  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.168455  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:09.168465  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:09.236920  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.236943  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.269085  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.269103  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.339867  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.339886  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.354523  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.354541  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.432066  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.420805   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.421538   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.423585   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.424589   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.425304   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
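	From the timestamps (07:52:48, :51, :54, ...), the wait loop retries the pgrep probe roughly every three seconds. An equivalent wait, sketched with an assumed overall timeout since the real deadline does not appear in this excerpt:

	    # Wait for the apiserver process using the same pgrep probe as the log.
	    deadline=$((SECONDS + 360))   # assumed timeout, not taken from the log
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      (( SECONDS >= deadline )) && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
	      sleep 3
	    done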
	I1210 07:53:11.933763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:11.943879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:11.943943  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:11.969555  418823 cri.go:89] found id: ""
	I1210 07:53:11.969578  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.969586  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:11.969591  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:11.969663  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:11.997107  418823 cri.go:89] found id: ""
	I1210 07:53:11.997121  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.997128  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:11.997133  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:11.997198  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.025616  418823 cri.go:89] found id: ""
	I1210 07:53:12.025630  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.025638  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.025644  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.025712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.052893  418823 cri.go:89] found id: ""
	I1210 07:53:12.052906  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.052914  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.052919  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.052983  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.077956  418823 cri.go:89] found id: ""
	I1210 07:53:12.077979  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.077988  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.077993  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.078064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.104169  418823 cri.go:89] found id: ""
	I1210 07:53:12.104183  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.104200  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.104207  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.104278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.130790  418823 cri.go:89] found id: ""
	I1210 07:53:12.130804  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.130812  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.130819  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.130831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.194759  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.194778  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.209969  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.209985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.272708  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.265266   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.265649   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267247   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267585   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.269013   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:12.265266   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.265649   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267247   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267585   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.269013   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:12.272718  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:12.272730  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:12.339739  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.339759  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
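	The block above is one full iteration of minikube's apiserver wait loop: poll for a kube-apiserver process, list CRI containers for each control-plane component, and, finding none, dump kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The liveness check can be reproduced by hand on the node (a minimal sketch; the pgrep pattern is copied from the Run: line above, and port 8441 is assumed from the kubectl errors, not verified here):

	    # same process check the wait loop runs
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # probe the secure port the kubectl errors point at
	    curl -sk https://localhost:8441/healthz || echo "no apiserver answering on :8441"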
	I1210 07:53:14.870834  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:14.882996  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:14.883096  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:14.912032  418823 cri.go:89] found id: ""
	I1210 07:53:14.912046  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.912053  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:14.912059  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:14.912116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:14.937034  418823 cri.go:89] found id: ""
	I1210 07:53:14.937048  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.937056  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:14.937061  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:14.937122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:14.962165  418823 cri.go:89] found id: ""
	I1210 07:53:14.962180  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.962187  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:14.962192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:14.962256  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:14.987169  418823 cri.go:89] found id: ""
	I1210 07:53:14.987182  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.987190  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:14.987194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:14.987250  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.026690  418823 cri.go:89] found id: ""
	I1210 07:53:15.026706  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.026714  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.026719  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.026788  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.057882  418823 cri.go:89] found id: ""
	I1210 07:53:15.057896  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.057903  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.057908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.057977  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.084042  418823 cri.go:89] found id: ""
	I1210 07:53:15.084057  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.084064  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.084072  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.084082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:15.114864  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.114880  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.179901  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.179922  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.194821  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.194838  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.259725  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.250726   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.251670   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.253338   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.254117   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.255652   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:15.250726   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.251670   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.253338   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.254117   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.255652   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:15.259735  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:15.259747  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
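	Each iteration issues one crictl query per control-plane component. The same check collapses into a single loop (a sketch using only flags that appear in the Run: lines; run inside the node):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	      # --quiet prints only container IDs; empty output means no match
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      echo "$c: ${ids:-none}"
	    done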
	I1210 07:53:17.826809  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:17.837193  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:17.837254  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:17.863390  418823 cri.go:89] found id: ""
	I1210 07:53:17.863404  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.863411  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:17.863416  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:17.863481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:17.893221  418823 cri.go:89] found id: ""
	I1210 07:53:17.893236  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.893243  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:17.893248  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:17.893306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:17.921130  418823 cri.go:89] found id: ""
	I1210 07:53:17.921155  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.921163  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:17.921168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:17.921236  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:17.945888  418823 cri.go:89] found id: ""
	I1210 07:53:17.945901  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.945909  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:17.945914  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:17.945972  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:17.970988  418823 cri.go:89] found id: ""
	I1210 07:53:17.971002  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.971022  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:17.971027  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:17.971097  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:17.996399  418823 cri.go:89] found id: ""
	I1210 07:53:17.996413  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.996420  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:17.996425  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:17.996494  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.023886  418823 cri.go:89] found id: ""
	I1210 07:53:18.023900  418823 logs.go:282] 0 containers: []
	W1210 07:53:18.023908  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.023931  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.023947  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.090117  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.090136  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.105261  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.105280  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.174300  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.166057   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.166727   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168471   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168892   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.170326   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:18.166057   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.166727   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168471   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168892   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.170326   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:18.174310  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:18.174322  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:18.241759  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.241779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:20.779144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:20.788940  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:20.788999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:20.814543  418823 cri.go:89] found id: ""
	I1210 07:53:20.814557  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.814564  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:20.814569  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:20.814634  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:20.839723  418823 cri.go:89] found id: ""
	I1210 07:53:20.839737  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.839744  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:20.839749  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:20.839808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:20.869222  418823 cri.go:89] found id: ""
	I1210 07:53:20.869237  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.869244  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:20.869249  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:20.869310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:20.893562  418823 cri.go:89] found id: ""
	I1210 07:53:20.893576  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.893593  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:20.893598  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:20.893664  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:20.919439  418823 cri.go:89] found id: ""
	I1210 07:53:20.919454  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.919461  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:20.919466  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:20.919526  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:20.947602  418823 cri.go:89] found id: ""
	I1210 07:53:20.947617  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.947624  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:20.947629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:20.947688  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:20.976621  418823 cri.go:89] found id: ""
	I1210 07:53:20.976635  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.976642  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:20.976650  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:20.976666  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.040860  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.040884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.055749  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.055767  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.122414  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.114763   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.115197   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116691   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116998   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.118463   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:21.114763   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.115197   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116691   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116998   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.118463   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:21.122458  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:21.122468  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:21.188312  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.188333  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
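	With zero containers found, the gatherer falls back to host-level logs. The underlying commands are visible verbatim in the Run: lines and can be replayed directly on the node (the 400-line tail is minikube's own choice, kept as-is):

	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400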
	I1210 07:53:23.717609  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:23.730817  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:23.730882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:23.756488  418823 cri.go:89] found id: ""
	I1210 07:53:23.756504  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.756512  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:23.756518  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:23.756584  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:23.782540  418823 cri.go:89] found id: ""
	I1210 07:53:23.782555  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.782562  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:23.782567  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:23.782626  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:23.807181  418823 cri.go:89] found id: ""
	I1210 07:53:23.807195  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.807204  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:23.807209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:23.807273  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:23.831876  418823 cri.go:89] found id: ""
	I1210 07:53:23.831891  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.831900  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:23.831905  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:23.831964  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:23.858557  418823 cri.go:89] found id: ""
	I1210 07:53:23.858572  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.858580  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:23.858585  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:23.858646  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:23.883797  418823 cri.go:89] found id: ""
	I1210 07:53:23.883811  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.883820  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:23.883825  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:23.883922  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:23.913668  418823 cri.go:89] found id: ""
	I1210 07:53:23.913682  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.913690  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:23.913698  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:23.913709  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:23.977126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:23.968779   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.969424   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971050   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971601   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.973232   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:23.968779   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.969424   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971050   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971601   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.973232   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:23.977136  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:23.977147  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:24.045089  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.045110  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.076143  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.076161  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.142779  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.142798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:26.658408  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:26.669312  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:26.669374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:26.697592  418823 cri.go:89] found id: ""
	I1210 07:53:26.697607  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.697615  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:26.697621  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:26.697687  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:26.725323  418823 cri.go:89] found id: ""
	I1210 07:53:26.725363  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.725370  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:26.725375  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:26.725433  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:26.754039  418823 cri.go:89] found id: ""
	I1210 07:53:26.754053  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.754060  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:26.754066  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:26.754122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:26.788322  418823 cri.go:89] found id: ""
	I1210 07:53:26.788337  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.788344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:26.788349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:26.788408  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:26.818143  418823 cri.go:89] found id: ""
	I1210 07:53:26.818157  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.818180  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:26.818185  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:26.818246  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:26.845686  418823 cri.go:89] found id: ""
	I1210 07:53:26.845699  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.845707  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:26.845714  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:26.845772  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:26.871522  418823 cri.go:89] found id: ""
	I1210 07:53:26.871536  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.871544  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:26.871552  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:26.871568  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:26.902527  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:26.902544  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:26.967583  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:26.967603  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:26.982258  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:26.982275  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.053700  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.045382   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.046023   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.047684   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.048159   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.049667   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:27.045382   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.046023   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.047684   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.048159   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.049667   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:27.053710  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:27.053722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
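	Every describe-nodes attempt fails identically: the bundled kubectl cannot reach https://localhost:8441. Checking whether anything listens on that port separates "apiserver never started" from a kubeconfig problem (a sketch; ss being available in the node image is an assumption):

	    # is anything bound to the port named in the kubectl errors?
	    sudo ss -tlnp 'sport = :8441'
	    # replay the exact gatherer command with the bundled kubectl
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig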
	I1210 07:53:29.623259  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:29.633196  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:29.633265  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:29.658246  418823 cri.go:89] found id: ""
	I1210 07:53:29.658271  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.658278  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:29.658283  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:29.658358  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:29.685747  418823 cri.go:89] found id: ""
	I1210 07:53:29.685762  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.685769  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:29.685775  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:29.685842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:29.721266  418823 cri.go:89] found id: ""
	I1210 07:53:29.721280  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.721288  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:29.721292  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:29.721350  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:29.746632  418823 cri.go:89] found id: ""
	I1210 07:53:29.746647  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.746655  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:29.746660  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:29.746718  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:29.771709  418823 cri.go:89] found id: ""
	I1210 07:53:29.771725  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.771732  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:29.771737  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:29.771800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:29.801580  418823 cri.go:89] found id: ""
	I1210 07:53:29.801595  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.801602  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:29.801608  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:29.801673  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:29.827750  418823 cri.go:89] found id: ""
	I1210 07:53:29.827764  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.827771  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:29.827780  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:29.827795  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:29.893437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:29.885346   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.885722   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887338   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887997   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.889654   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:29.885346   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.885722   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887338   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887997   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.889654   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:29.893447  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:29.893458  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:29.960399  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:29.960419  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:29.991781  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:29.991799  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.072819  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.072841  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:32.588396  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:32.598821  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:32.598882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:32.628590  418823 cri.go:89] found id: ""
	I1210 07:53:32.628604  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.628611  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:32.628616  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:32.628678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:32.658338  418823 cri.go:89] found id: ""
	I1210 07:53:32.658352  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.658359  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:32.658364  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:32.658424  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:32.701705  418823 cri.go:89] found id: ""
	I1210 07:53:32.701719  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.701727  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:32.701732  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:32.701792  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:32.735461  418823 cri.go:89] found id: ""
	I1210 07:53:32.735476  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.735483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:32.735488  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:32.735548  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:32.761096  418823 cri.go:89] found id: ""
	I1210 07:53:32.761109  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.761116  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:32.761121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:32.761180  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:32.787468  418823 cri.go:89] found id: ""
	I1210 07:53:32.787481  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.787488  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:32.787493  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:32.787553  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:32.813085  418823 cri.go:89] found id: ""
	I1210 07:53:32.813098  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.813105  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:32.813113  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:32.813123  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:32.881504  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:32.873323   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.874057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.875714   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.876057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.877280   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:32.873323   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.874057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.875714   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.876057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.877280   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:32.881541  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:32.881552  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:32.951245  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:32.951265  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:32.980096  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:32.980113  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:33.046381  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.046400  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
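	The container-status step is already written defensively: it prefers whatever crictl `which` finds and falls back to docker. Expanded from the backtick form in the Run: lines (a behaviorally equivalent $() spelling):

	    # use crictl if installed, else try the bare name, else fall back to docker
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a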
	I1210 07:53:35.561454  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.571515  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.571579  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.596461  418823 cri.go:89] found id: ""
	I1210 07:53:35.596476  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.596483  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.596488  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.596547  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:35.623764  418823 cri.go:89] found id: ""
	I1210 07:53:35.623780  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.623787  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:35.623792  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:35.623852  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:35.649136  418823 cri.go:89] found id: ""
	I1210 07:53:35.649150  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.649159  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:35.649164  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:35.649267  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:35.689785  418823 cri.go:89] found id: ""
	I1210 07:53:35.689799  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.689806  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:35.689820  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:35.689883  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:35.717073  418823 cri.go:89] found id: ""
	I1210 07:53:35.717086  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.717104  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:35.717109  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:35.717167  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:35.747852  418823 cri.go:89] found id: ""
	I1210 07:53:35.747866  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.747874  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:35.747879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:35.747936  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:35.772479  418823 cri.go:89] found id: ""
	I1210 07:53:35.772493  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.772500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:35.772508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:35.772519  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:35.843052  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:35.843075  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:35.857842  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:35.857859  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:35.927434  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:35.918703   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.919740   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921421   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921929   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.923509   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:35.918703   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.919740   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921421   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921929   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.923509   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:35.927445  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:35.927457  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:35.996278  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:35.996299  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:38.532848  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.543645  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.543706  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.573367  418823 cri.go:89] found id: ""
	I1210 07:53:38.573382  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.573389  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.573394  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.573456  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.603108  418823 cri.go:89] found id: ""
	I1210 07:53:38.603122  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.603129  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.603134  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.603193  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.629381  418823 cri.go:89] found id: ""
	I1210 07:53:38.629395  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.629402  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.629407  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.629467  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.662313  418823 cri.go:89] found id: ""
	I1210 07:53:38.662327  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.662334  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.662339  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.662402  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:38.704257  418823 cri.go:89] found id: ""
	I1210 07:53:38.704271  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.704279  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:38.704284  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:38.704346  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:38.734287  418823 cri.go:89] found id: ""
	I1210 07:53:38.734302  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.734309  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:38.734315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:38.734375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:38.760452  418823 cri.go:89] found id: ""
	I1210 07:53:38.760467  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.760474  418823 logs.go:284] No container was found matching "kindnet"
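
The seven checks above are one crictl query per control-plane component; the sweep that every pass repeats is equivalent to roughly this loop:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  printf '== %s ==\n' "$c"
	  sudo crictl ps -a --quiet --name="$c"
	done

Because ps -a includes exited containers, an empty result for every component, as here, means CRI-O never created any control-plane containers at all, not merely that they stopped.
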
	I1210 07:53:38.760483  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:38.760493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:38.827227  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:38.827248  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:38.841994  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:38.842011  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:38.909535  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:38.899398   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.900949   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.901647   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.903592   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.904308   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:38.909548  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:38.909559  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:38.977890  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:38.977912  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:41.514495  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.524880  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.524939  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.550178  418823 cri.go:89] found id: ""
	I1210 07:53:41.550208  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.550216  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.550220  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.550289  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.578068  418823 cri.go:89] found id: ""
	I1210 07:53:41.578090  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.578097  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.578102  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.578175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.603754  418823 cri.go:89] found id: ""
	I1210 07:53:41.603768  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.603776  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.603782  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.603840  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.628986  418823 cri.go:89] found id: ""
	I1210 07:53:41.629000  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.629008  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.629013  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.629072  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.654287  418823 cri.go:89] found id: ""
	I1210 07:53:41.654302  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.654309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.654314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.654384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.688416  418823 cri.go:89] found id: ""
	I1210 07:53:41.688430  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.688437  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.688442  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.688498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.713499  418823 cri.go:89] found id: ""
	I1210 07:53:41.713513  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.713521  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.713528  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:41.713538  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:41.730410  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:41.730426  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:41.799336  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:41.790498   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.791386   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793149   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793773   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.794989   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:41.799346  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:41.799357  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:41.867347  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:41.867369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:41.895652  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:41.895669  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
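
The same four log sources are gathered on every pass, only in rotating order; they can be pulled manually with the identical commands minikube runs over SSH:

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Of these, the kubelet and CRI-O journals are the ones most likely to explain why the static pods never came up.
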
	I1210 07:53:44.462932  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.472795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.472854  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.504932  418823 cri.go:89] found id: ""
	I1210 07:53:44.504947  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.504960  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.504965  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.505025  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.535103  418823 cri.go:89] found id: ""
	I1210 07:53:44.535125  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.535133  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.535138  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.535204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.560225  418823 cri.go:89] found id: ""
	I1210 07:53:44.560239  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.560247  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.560252  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.560310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.585575  418823 cri.go:89] found id: ""
	I1210 07:53:44.585597  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.585604  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.585609  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.585668  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.611737  418823 cri.go:89] found id: ""
	I1210 07:53:44.611751  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.611758  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.611763  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.611824  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.636495  418823 cri.go:89] found id: ""
	I1210 07:53:44.636510  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.636517  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.636522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.636580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.665441  418823 cri.go:89] found id: ""
	I1210 07:53:44.665455  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.665463  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.665471  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:44.665481  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:44.702032  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:44.702048  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:44.776362  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:44.776383  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:44.792240  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.792256  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:44.854270  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:44.846404   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.846945   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848504   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848953   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.850457   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:44.854279  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:44.854291  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:47.423978  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.436858  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.436919  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.461997  418823 cri.go:89] found id: ""
	I1210 07:53:47.462011  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.462018  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.462023  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.462125  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.487419  418823 cri.go:89] found id: ""
	I1210 07:53:47.487434  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.487441  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.487446  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.487504  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.512823  418823 cri.go:89] found id: ""
	I1210 07:53:47.512837  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.512845  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.512850  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.512913  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.538819  418823 cri.go:89] found id: ""
	I1210 07:53:47.538833  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.538840  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.538845  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.538903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.563454  418823 cri.go:89] found id: ""
	I1210 07:53:47.563468  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.563476  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.563481  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.563544  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.588347  418823 cri.go:89] found id: ""
	I1210 07:53:47.588361  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.588368  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.588374  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.588435  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.613835  418823 cri.go:89] found id: ""
	I1210 07:53:47.613848  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.613855  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.613863  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:47.613874  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:47.679468  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:47.679488  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:47.695124  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:47.695148  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:47.764330  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:47.755921   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.756582   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758176   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758693   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.760262   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:47.764340  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:47.764350  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:47.834926  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:47.834946  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:50.366762  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.376894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.376958  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.402825  418823 cri.go:89] found id: ""
	I1210 07:53:50.402839  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.402846  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.402851  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.402912  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.431663  418823 cri.go:89] found id: ""
	I1210 07:53:50.431677  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.431685  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.431690  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.431748  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.458799  418823 cri.go:89] found id: ""
	I1210 07:53:50.458813  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.458821  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.458826  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.458885  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.483609  418823 cri.go:89] found id: ""
	I1210 07:53:50.483623  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.483630  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.483635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.483693  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.509720  418823 cri.go:89] found id: ""
	I1210 07:53:50.509735  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.509743  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.509748  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.509808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.535475  418823 cri.go:89] found id: ""
	I1210 07:53:50.535489  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.535496  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.535501  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.535560  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.559559  418823 cri.go:89] found id: ""
	I1210 07:53:50.559572  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.559580  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.559587  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:50.559598  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:50.624409  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:50.624430  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:50.639099  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:50.639117  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:50.734659  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:50.717698   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.718879   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.720716   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.721025   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.727758   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:50.734673  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:50.734686  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:50.801764  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.801789  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.334554  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.344704  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.344767  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.369027  418823 cri.go:89] found id: ""
	I1210 07:53:53.369041  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.369049  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.369054  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.369112  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.392884  418823 cri.go:89] found id: ""
	I1210 07:53:53.392897  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.392904  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.392909  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.392967  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.421604  418823 cri.go:89] found id: ""
	I1210 07:53:53.421618  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.421625  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.421630  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.421690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.446954  418823 cri.go:89] found id: ""
	I1210 07:53:53.446968  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.446976  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.446982  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.447078  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.472681  418823 cri.go:89] found id: ""
	I1210 07:53:53.472696  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.472703  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.472708  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.472769  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.497847  418823 cri.go:89] found id: ""
	I1210 07:53:53.497861  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.497868  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.497873  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.497934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.524109  418823 cri.go:89] found id: ""
	I1210 07:53:53.524123  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.524131  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.524138  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.524149  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:53.593506  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:53.593527  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:53.607933  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:53.607950  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:53.678735  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:53.669132   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671186   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671527   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673119   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673744   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:53.678745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:53.678755  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:53.752843  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.752865  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.287368  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.297545  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.297605  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.327438  418823 cri.go:89] found id: ""
	I1210 07:53:56.327452  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.327459  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.327465  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.327525  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.357601  418823 cri.go:89] found id: ""
	I1210 07:53:56.357616  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.357623  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.357627  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.357686  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.382796  418823 cri.go:89] found id: ""
	I1210 07:53:56.382810  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.382817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.382822  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.382878  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.410018  418823 cri.go:89] found id: ""
	I1210 07:53:56.410032  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.410039  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.410050  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.410110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.437449  418823 cri.go:89] found id: ""
	I1210 07:53:56.437472  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.437480  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.437485  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.437551  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.462063  418823 cri.go:89] found id: ""
	I1210 07:53:56.462077  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.462096  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.462102  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.462178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.489728  418823 cri.go:89] found id: ""
	I1210 07:53:56.489743  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.489750  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.489757  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.489771  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.504129  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.504145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.569498  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.561896   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.562381   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.563902   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.564206   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.565675   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:56.569507  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:56.569518  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:56.638285  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.638304  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.676473  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.676490  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.250249  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.260346  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.260407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.288615  418823 cri.go:89] found id: ""
	I1210 07:53:59.288633  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.288640  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.288645  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.288707  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.314559  418823 cri.go:89] found id: ""
	I1210 07:53:59.314574  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.314581  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.314586  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.314652  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.339212  418823 cri.go:89] found id: ""
	I1210 07:53:59.339227  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.339235  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.339240  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.339296  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.365478  418823 cri.go:89] found id: ""
	I1210 07:53:59.365493  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.365500  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.365505  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.365565  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.391116  418823 cri.go:89] found id: ""
	I1210 07:53:59.391131  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.391138  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.391143  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.391204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.417133  418823 cri.go:89] found id: ""
	I1210 07:53:59.417153  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.417161  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.417166  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.417225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.442940  418823 cri.go:89] found id: ""
	I1210 07:53:59.442954  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.442961  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.442968  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:59.442979  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:59.509257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.509277  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.541319  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.541335  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.607451  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.607470  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.621934  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.621951  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.693437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.684845   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.685597   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687307   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687819   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.689392   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
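
Each ~3-second pass above is one iteration of minikube's wait-for-apiserver loop; a rough shell sketch of what it keeps retrying (timeout handling omitted, and hypothetical as written) is:

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	  sleep 3   # between attempts, gather the log bundle shown in each pass
	done

The loop never terminates here because no apiserver process or container ever appears, which is consistent with the eventual test timeout.
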
	I1210 07:54:02.193693  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.204795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.204860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.230168  418823 cri.go:89] found id: ""
	I1210 07:54:02.230185  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.230192  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.230198  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.230311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.263333  418823 cri.go:89] found id: ""
	I1210 07:54:02.263349  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.263356  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.263361  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.263426  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.290361  418823 cri.go:89] found id: ""
	I1210 07:54:02.290376  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.290384  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.290388  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.290448  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.316861  418823 cri.go:89] found id: ""
	I1210 07:54:02.316875  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.316882  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.316894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.316951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.343227  418823 cri.go:89] found id: ""
	I1210 07:54:02.343242  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.343250  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.343255  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.343319  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.370541  418823 cri.go:89] found id: ""
	I1210 07:54:02.370555  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.370562  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.370567  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.370655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.397479  418823 cri.go:89] found id: ""
	I1210 07:54:02.397493  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.397500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.397508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.397522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.463725  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.463746  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.478295  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.478312  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.550548  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.542348   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.542913   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544446   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544888   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.546428   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:02.542348   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.542913   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544446   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544888   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.546428   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.550558  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:02.550569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:02.620103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.620125  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
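Each polling cycle above runs the same per-component container probe. A condensed sketch of that probe as it could be run by hand on the node; the component list mirrors the names queried in the log:

    # empty output from `crictl ps -a --quiet --name=X` means no container,
    # running or exited, has ever been created for that component
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done

Every probe in this run returns empty, i.e. CRI-O never created any control-plane containers; the failure is upstream of the containers themselves.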
	I1210 07:54:05.149959  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.160417  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.160482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.189797  418823 cri.go:89] found id: ""
	I1210 07:54:05.189812  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.189826  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.189831  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.189890  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.217788  418823 cri.go:89] found id: ""
	I1210 07:54:05.217815  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.217823  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.217828  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.217893  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.243664  418823 cri.go:89] found id: ""
	I1210 07:54:05.243678  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.243686  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.243690  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.243749  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.269052  418823 cri.go:89] found id: ""
	I1210 07:54:05.269067  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.269075  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.269080  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.269140  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.294538  418823 cri.go:89] found id: ""
	I1210 07:54:05.294552  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.294559  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.294564  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.294627  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.321865  418823 cri.go:89] found id: ""
	I1210 07:54:05.321880  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.321887  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.321893  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.321954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.348181  418823 cri.go:89] found id: ""
	I1210 07:54:05.348195  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.348203  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.348210  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.348225  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.379036  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.379062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.443960  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.443981  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.458603  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.458620  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.526883  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.519093   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.519877   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.521585   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.522069   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.523123   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:05.519093   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.519877   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.521585   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.522069   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.523123   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:05.526895  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:05.526910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
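Before each gathering pass, the runner first checks for a live apiserver process with pgrep. For reference, the flags in that probe: -f matches against the full command line, -x requires the pattern to match that whole line, and -n returns only the newest matching PID. A standalone equivalent, assuming the same pattern as in the log:

    # exit status 0 (and a PID) only if a kube-apiserver process exists
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process found" \
      || echo "no apiserver process"

Here it never finds one, which is consistent with the empty crictl probes above.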
	I1210 07:54:08.095997  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.105932  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.105991  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.130974  418823 cri.go:89] found id: ""
	I1210 07:54:08.130988  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.130996  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.131001  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.131153  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.155374  418823 cri.go:89] found id: ""
	I1210 07:54:08.155388  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.155396  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.155401  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.155458  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.180878  418823 cri.go:89] found id: ""
	I1210 07:54:08.180892  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.180899  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.180904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.180962  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.209651  418823 cri.go:89] found id: ""
	I1210 07:54:08.209664  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.209672  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.209676  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.209735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.235331  418823 cri.go:89] found id: ""
	I1210 07:54:08.235344  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.235358  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.235362  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.235421  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.260980  418823 cri.go:89] found id: ""
	I1210 07:54:08.260995  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.261003  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.261008  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.261066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.286809  418823 cri.go:89] found id: ""
	I1210 07:54:08.286824  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.286831  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.286838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.286848  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.353470  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.353491  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.367911  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.367928  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.434091  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.426418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.426959   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428869   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.430318   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:08.426418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.426959   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428869   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.430318   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:08.434101  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:08.434120  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:08.502201  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.502221  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:11.031209  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.041439  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.041500  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.067253  418823 cri.go:89] found id: ""
	I1210 07:54:11.067268  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.067275  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.067280  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.067339  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.092951  418823 cri.go:89] found id: ""
	I1210 07:54:11.092965  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.092972  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.092978  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.093038  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.118430  418823 cri.go:89] found id: ""
	I1210 07:54:11.118445  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.118453  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.118458  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.118520  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.144820  418823 cri.go:89] found id: ""
	I1210 07:54:11.144835  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.144843  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.144848  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.144914  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.173374  418823 cri.go:89] found id: ""
	I1210 07:54:11.173388  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.173396  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.173401  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.173459  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.198352  418823 cri.go:89] found id: ""
	I1210 07:54:11.198367  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.198375  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.198380  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.198450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.224536  418823 cri.go:89] found id: ""
	I1210 07:54:11.224550  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.224559  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.224569  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.224579  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.290262  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.290283  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:11.304639  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.304658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.368924  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.359948   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.360716   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362347   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362996   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.364714   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:11.359948   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.360716   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362347   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362996   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.364714   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:11.368934  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:11.368944  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:11.435589  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.435610  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:13.966356  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:13.976957  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:13.977022  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.004519  418823 cri.go:89] found id: ""
	I1210 07:54:14.004536  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.004546  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.004551  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.004633  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.033357  418823 cri.go:89] found id: ""
	I1210 07:54:14.033372  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.033380  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.033385  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.033445  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.059488  418823 cri.go:89] found id: ""
	I1210 07:54:14.059510  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.059517  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.059522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.059585  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.087964  418823 cri.go:89] found id: ""
	I1210 07:54:14.087987  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.087996  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.088002  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.088073  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.114469  418823 cri.go:89] found id: ""
	I1210 07:54:14.114483  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.114501  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.114507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.114580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.144394  418823 cri.go:89] found id: ""
	I1210 07:54:14.144408  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.144415  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.144420  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.144482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.173724  418823 cri.go:89] found id: ""
	I1210 07:54:14.173746  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.173754  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.173762  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.173779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.247855  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.239156   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.239953   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.241733   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.242311   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.243958   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:14.239156   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.239953   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.241733   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.242311   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.243958   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:14.247865  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:14.247879  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:14.317778  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.317798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:14.346568  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.346586  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:14.412678  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.412697  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
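When the apiserver container is missing entirely, the kubelet journal gathered in these passes is the most likely place to find the root cause (failed static-pod sync, image pull errors, certificate problems). A quick filter over the same journal slice, a sketch only; the grep pattern is an assumption, not part of the original run:

    # surface the most recent kubelet errors from the same 400-line window
    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -Ei 'error|fail(ed)?|back-?off' \
      | tail -n 20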
	I1210 07:54:16.927406  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:16.938842  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:16.938903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:16.972184  418823 cri.go:89] found id: ""
	I1210 07:54:16.972197  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.972204  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:16.972209  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:16.972268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:16.999114  418823 cri.go:89] found id: ""
	I1210 07:54:16.999129  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.999136  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:16.999141  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:16.999204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.026900  418823 cri.go:89] found id: ""
	I1210 07:54:17.026913  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.026921  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.026926  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.026985  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.053121  418823 cri.go:89] found id: ""
	I1210 07:54:17.053135  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.053143  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.053148  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.053208  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.079184  418823 cri.go:89] found id: ""
	I1210 07:54:17.079198  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.079204  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.079209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.079268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.104597  418823 cri.go:89] found id: ""
	I1210 07:54:17.104611  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.104619  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.104624  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.104681  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.133412  418823 cri.go:89] found id: ""
	I1210 07:54:17.133426  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.133434  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.133441  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.133452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.147432  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.147452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.210612  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.202657   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.203522   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205130   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205458   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.206975   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:17.202657   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.203522   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205130   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205458   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.206975   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:17.210623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:17.210634  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:17.279473  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.279493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:17.307828  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.307852  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:19.881299  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:19.891315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:19.891375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:19.926287  418823 cri.go:89] found id: ""
	I1210 07:54:19.926302  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.926309  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:19.926314  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:19.926373  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:19.961020  418823 cri.go:89] found id: ""
	I1210 07:54:19.961036  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.961043  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:19.961048  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:19.961111  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:19.994369  418823 cri.go:89] found id: ""
	I1210 07:54:19.994383  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.994390  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:19.994395  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:19.994455  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.028896  418823 cri.go:89] found id: ""
	I1210 07:54:20.028911  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.028919  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.028924  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.028989  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.059934  418823 cri.go:89] found id: ""
	I1210 07:54:20.059955  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.059963  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.060015  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.060093  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.086606  418823 cri.go:89] found id: ""
	I1210 07:54:20.086622  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.086629  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.086635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.086703  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.112469  418823 cri.go:89] found id: ""
	I1210 07:54:20.112486  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.112496  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.112504  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.112515  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.176933  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.176953  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.193125  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.193142  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.257603  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.249385   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.249848   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251552   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251918   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.253488   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:20.249385   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.249848   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251552   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251918   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.253488   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:20.257614  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:20.257625  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:20.324617  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.324638  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:22.853766  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:22.864101  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:22.864164  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:22.888959  418823 cri.go:89] found id: ""
	I1210 07:54:22.888974  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.888981  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:22.888986  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:22.889046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:22.921447  418823 cri.go:89] found id: ""
	I1210 07:54:22.921460  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.921468  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:22.921473  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:22.921543  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:22.955505  418823 cri.go:89] found id: ""
	I1210 07:54:22.955519  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.955526  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:22.955531  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:22.955594  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:22.986982  418823 cri.go:89] found id: ""
	I1210 07:54:22.986996  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.987004  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:22.987031  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:22.987094  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.016264  418823 cri.go:89] found id: ""
	I1210 07:54:23.016279  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.016286  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.016291  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.016354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.046460  418823 cri.go:89] found id: ""
	I1210 07:54:23.046474  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.046482  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.046507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.046577  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.074337  418823 cri.go:89] found id: ""
	I1210 07:54:23.074352  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.074361  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.074369  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.074384  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.139358  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.139380  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.154211  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.154233  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.215488  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.206516   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.207119   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.208073   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.209680   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.210273   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:23.206516   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.207119   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.208073   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.209680   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.210273   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:23.215499  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:23.215512  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:23.282950  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.282971  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:25.812054  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.822192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.822255  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.847807  418823 cri.go:89] found id: ""
	I1210 07:54:25.847822  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.847831  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.847836  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.847900  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:25.876611  418823 cri.go:89] found id: ""
	I1210 07:54:25.876626  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.876634  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:25.876638  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:25.876698  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:25.902947  418823 cri.go:89] found id: ""
	I1210 07:54:25.902961  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.902968  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:25.902973  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:25.903056  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:25.944041  418823 cri.go:89] found id: ""
	I1210 07:54:25.944055  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.944062  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:25.944068  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:25.944128  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:25.970835  418823 cri.go:89] found id: ""
	I1210 07:54:25.970849  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.970857  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:25.970862  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:25.970923  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.003198  418823 cri.go:89] found id: ""
	I1210 07:54:26.003214  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.003222  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.003228  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.003300  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.032526  418823 cri.go:89] found id: ""
	I1210 07:54:26.032540  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.032548  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.032556  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.032569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.099635  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.099655  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.114354  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.114373  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.179258  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.170492   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.171349   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173076   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173401   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.174907   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:26.170492   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.171349   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173076   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173401   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.174907   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:26.179269  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:26.179281  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:26.248336  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.248355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
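Note how the "container status" step shells out with a runtime-agnostic fallback: it prefers crictl when present and only then tries docker. A minimal standalone sketch of the same probe (command taken verbatim from the log line above; running it by hand inside the node is an assumption):

    # Prefer crictl (CRI-O/containerd); if `which crictl` fails, the bare name
    # "crictl" is echoed so the command errors and the `||` docker branch runs.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a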
	I1210 07:54:28.782480  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.792391  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.792450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.817311  418823 cri.go:89] found id: ""
	I1210 07:54:28.817325  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.817332  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.817338  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.817393  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.841584  418823 cri.go:89] found id: ""
	I1210 07:54:28.841597  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.841605  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.841609  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.841666  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.867004  418823 cri.go:89] found id: ""
	I1210 07:54:28.867040  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.867048  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.867052  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.867110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:28.891591  418823 cri.go:89] found id: ""
	I1210 07:54:28.891604  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.891615  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:28.891621  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:28.891677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:28.927624  418823 cri.go:89] found id: ""
	I1210 07:54:28.927637  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.927645  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:28.927650  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:28.927714  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:28.955409  418823 cri.go:89] found id: ""
	I1210 07:54:28.955423  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.955430  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:28.955435  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:28.955493  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:28.980779  418823 cri.go:89] found id: ""
	I1210 07:54:28.980794  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.980801  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:28.980808  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:28.980819  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:28.995862  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:28.995878  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.065674  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.057420   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.058224   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.059795   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.060097   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.061321   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:29.057420   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.058224   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.059795   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.060097   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.061321   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:29.065683  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:29.065695  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:29.133594  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.133615  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.165522  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.165539  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
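Each roughly three-second cycle above begins with the same readiness probe: pgrep for a kube-apiserver process whose command line mentions minikube, and, while it stays absent, re-list the CRI containers and re-gather logs. A minimal sketch of that wait loop, assuming the log's pgrep pattern; the 60-second budget here is illustrative, not minikube's actual timeout:

    # Poll until an apiserver process with "minikube" on its command line appears.
    for i in $(seq 1 20); do
        if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
            echo "kube-apiserver is up"
            break
        fi
        sleep 3
    done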
	I1210 07:54:31.733707  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.743741  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.743803  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.768618  418823 cri.go:89] found id: ""
	I1210 07:54:31.768633  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.768647  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.768652  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.768712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.797641  418823 cri.go:89] found id: ""
	I1210 07:54:31.797656  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.797663  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.797668  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.797729  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.823152  418823 cri.go:89] found id: ""
	I1210 07:54:31.823166  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.823174  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.823178  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.823241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.849644  418823 cri.go:89] found id: ""
	I1210 07:54:31.849659  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.849666  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.849671  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.849735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.877522  418823 cri.go:89] found id: ""
	I1210 07:54:31.877545  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.877553  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.877558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.877625  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.903129  418823 cri.go:89] found id: ""
	I1210 07:54:31.903142  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.903150  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.903155  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.903212  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:31.941362  418823 cri.go:89] found id: ""
	I1210 07:54:31.941376  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.941383  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:31.941391  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:31.941402  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.025544  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.025566  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.040949  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.040969  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.110721  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.101989   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.102773   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104458   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104789   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.106701   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:32.101989   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.102773   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104458   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104789   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.106701   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:32.110732  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:32.110743  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:32.178647  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.178670  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:34.707070  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.717245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.717310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.745693  418823 cri.go:89] found id: ""
	I1210 07:54:34.745707  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.745714  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.745726  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.745790  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.771395  418823 cri.go:89] found id: ""
	I1210 07:54:34.771409  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.771416  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.771421  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.771479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.797775  418823 cri.go:89] found id: ""
	I1210 07:54:34.797788  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.797796  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.797801  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.797861  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.825083  418823 cri.go:89] found id: ""
	I1210 07:54:34.825100  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.825107  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.825112  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.825177  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.850864  418823 cri.go:89] found id: ""
	I1210 07:54:34.850879  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.850896  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.850901  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.850975  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.875132  418823 cri.go:89] found id: ""
	I1210 07:54:34.875146  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.875154  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.875159  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.875227  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.899938  418823 cri.go:89] found id: ""
	I1210 07:54:34.899953  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.899970  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.899979  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:34.899990  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:34.923898  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:34.923916  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.004342  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:34.994993   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.995801   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997355   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997871   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.999627   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:34.994993   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.995801   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997355   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997871   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.999627   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:35.004372  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:35.004385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:35.076257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.076279  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:35.104842  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:35.104858  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:37.672039  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.681946  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.682009  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.706328  418823 cri.go:89] found id: ""
	I1210 07:54:37.706342  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.706349  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.706354  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.706420  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.731157  418823 cri.go:89] found id: ""
	I1210 07:54:37.731171  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.731179  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.731183  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.731243  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.756672  418823 cri.go:89] found id: ""
	I1210 07:54:37.756686  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.756693  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.756698  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.756758  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.782323  418823 cri.go:89] found id: ""
	I1210 07:54:37.782337  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.782344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.782349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.782407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.809398  418823 cri.go:89] found id: ""
	I1210 07:54:37.809411  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.809425  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.809430  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.809488  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.834279  418823 cri.go:89] found id: ""
	I1210 07:54:37.834300  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.834307  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.834311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.834378  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.860329  418823 cri.go:89] found id: ""
	I1210 07:54:37.860343  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.860351  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.860359  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:37.860369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:37.933541  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:37.923454   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.924390   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926115   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926751   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.928659   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:37.923454   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.924390   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926115   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926751   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.928659   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:37.933553  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:37.933564  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:38.012971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.012996  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:38.049266  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:38.049284  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.124985  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.125006  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:40.640115  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.651783  418823 kubeadm.go:602] duration metric: took 4m3.269334188s to restartPrimaryControlPlane
	W1210 07:54:40.651842  418823 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 07:54:40.651915  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
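After 4m3s of fruitless polling, restartPrimaryControlPlane gives up and minikube falls back to wiping the control plane with kubeadm reset before re-initializing from scratch. A sketch of the manual equivalent on a CRI-O node, reusing the binary path and CRI socket shown in the log line above:

    # Same reset invocation minikube issues (paths from the log).
    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force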
	I1210 07:54:41.061132  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:54:41.073851  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:54:41.081733  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:54:41.081788  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:54:41.089443  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:54:41.089453  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:54:41.089505  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:54:41.097510  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:54:41.097570  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:54:41.105078  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:54:41.112622  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:54:41.112682  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:54:41.120112  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.127831  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:54:41.127887  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.135843  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:54:41.143605  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:54:41.143662  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
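The stale-config sweep above is the same two-step check applied to four kubeconfigs in turn: grep each file for the expected control-plane endpoint and, when the file is missing or points elsewhere, delete it so the upcoming kubeadm init can rewrite it. A compact sketch of that loop (endpoint and file list taken from the log):

    ENDPOINT='https://control-plane.minikube.internal:8441'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Remove the kubeconfig unless it already references the expected endpoint.
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done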
	I1210 07:54:41.150893  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:54:41.188283  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:54:41.188576  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:54:41.266308  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:54:41.266369  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:54:41.266407  418823 kubeadm.go:319] OS: Linux
	I1210 07:54:41.266448  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:54:41.266493  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:54:41.266536  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:54:41.266581  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:54:41.266627  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:54:41.266672  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:54:41.266714  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:54:41.266758  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:54:41.266801  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:54:41.327793  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:54:41.327890  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:54:41.327975  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:54:41.335492  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:54:41.340870  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:54:41.340961  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:54:41.341031  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:54:41.341119  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:54:41.341186  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:54:41.341262  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:54:41.341320  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:54:41.341398  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:54:41.341465  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:54:41.341545  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:54:41.341622  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:54:41.341659  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:54:41.341719  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:54:41.831104  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:54:41.953522  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:54:42.205323  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:54:42.449785  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:54:42.618213  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:54:42.619047  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:54:42.621575  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:54:42.624790  418823 out.go:252]   - Booting up control plane ...
	I1210 07:54:42.624883  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:54:42.624959  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:54:42.625035  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:54:42.639751  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:54:42.639880  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:54:42.648702  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:54:42.648797  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:54:42.648841  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:54:42.779710  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:54:42.779857  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:58:42.778273  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000214333s
	I1210 07:58:42.778318  418823 kubeadm.go:319] 
	I1210 07:58:42.778386  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:58:42.778418  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:58:42.778523  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:58:42.778528  418823 kubeadm.go:319] 
	I1210 07:58:42.778632  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:58:42.778679  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:58:42.778709  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:58:42.778712  418823 kubeadm.go:319] 
	I1210 07:58:42.783355  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:58:42.783807  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:58:42.783918  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:58:42.784153  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:58:42.784159  418823 kubeadm.go:319] 
	I1210 07:58:42.784227  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 07:58:42.784352  418823 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000214333s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
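The init failure itself is a kubelet liveness timeout: kubeadm polls the kubelet's healthz endpoint on 127.0.0.1:10248 for up to 4m0s and never gets an answer. To reproduce the probe and pull the kubelet's own account of why it never came up, the commands the kubeadm output itself recommends amount to the following sketch (run inside the minikube node):

    # The exact probe kubeadm performs, per the error message above.
    curl -sSL http://127.0.0.1:10248/healthz
    # kubelet status and recent journal, as the kubeadm output suggests.
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 100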
	
	I1210 07:58:42.784459  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 07:58:43.198112  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:58:43.211996  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:58:43.212056  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:58:43.219732  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:58:43.219740  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:58:43.219791  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:58:43.228096  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:58:43.228153  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:58:43.235851  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:58:43.244105  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:58:43.244161  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:58:43.252172  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.259776  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:58:43.259838  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.267182  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:58:43.274881  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:58:43.274939  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:58:43.282494  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:58:43.323208  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:58:43.323257  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:58:43.392495  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:58:43.392566  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:58:43.392605  418823 kubeadm.go:319] OS: Linux
	I1210 07:58:43.392653  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:58:43.392700  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:58:43.392753  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:58:43.392806  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:58:43.392856  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:58:43.392902  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:58:43.392950  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:58:43.392997  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:58:43.393041  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:58:43.459397  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:58:43.459500  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:58:43.459594  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:58:43.467473  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:58:43.472849  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:58:43.472935  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:58:43.472999  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:58:43.473075  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:58:43.473135  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:58:43.473203  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:58:43.473256  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:58:43.473324  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:58:43.473385  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:58:43.474012  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:58:43.474414  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:58:43.474604  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:58:43.474667  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:58:43.690916  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:58:43.922489  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:58:44.055635  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:58:44.187430  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:58:44.228570  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:58:44.229295  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:58:44.233140  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:58:44.236201  418823 out.go:252]   - Booting up control plane ...
	I1210 07:58:44.236295  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:58:44.236371  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:58:44.236933  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:58:44.251863  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:58:44.251964  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:58:44.259287  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:58:44.259598  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:58:44.259801  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:58:44.391514  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:58:44.391627  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:02:44.389879  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00019224s
	I1210 08:02:44.389912  418823 kubeadm.go:319] 
	I1210 08:02:44.389980  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 08:02:44.390013  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 08:02:44.390123  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 08:02:44.390155  418823 kubeadm.go:319] 
	I1210 08:02:44.390271  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 08:02:44.390303  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 08:02:44.390331  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 08:02:44.390335  418823 kubeadm.go:319] 
	I1210 08:02:44.395328  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:02:44.395720  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 08:02:44.395823  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 08:02:44.396068  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 08:02:44.396072  418823 kubeadm.go:319] 
	I1210 08:02:44.396138  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 08:02:44.396188  418823 kubeadm.go:403] duration metric: took 12m7.052327562s to StartCluster
	I1210 08:02:44.396219  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:02:44.396280  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:02:44.421374  418823 cri.go:89] found id: ""
	I1210 08:02:44.421389  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.421396  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:02:44.421401  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:02:44.421463  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:02:44.447342  418823 cri.go:89] found id: ""
	I1210 08:02:44.447356  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.447363  418823 logs.go:284] No container was found matching "etcd"
	I1210 08:02:44.447368  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:02:44.447429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:02:44.472601  418823 cri.go:89] found id: ""
	I1210 08:02:44.472614  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.472621  418823 logs.go:284] No container was found matching "coredns"
	I1210 08:02:44.472627  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:02:44.472684  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:02:44.501973  418823 cri.go:89] found id: ""
	I1210 08:02:44.501986  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.501993  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:02:44.502000  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:02:44.502059  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:02:44.527997  418823 cri.go:89] found id: ""
	I1210 08:02:44.528011  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.528018  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:02:44.528023  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:02:44.528083  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:02:44.558353  418823 cri.go:89] found id: ""
	I1210 08:02:44.558367  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.558374  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:02:44.558379  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:02:44.558439  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:02:44.583751  418823 cri.go:89] found id: ""
	I1210 08:02:44.583764  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.583771  418823 logs.go:284] No container was found matching "kindnet"
	I1210 08:02:44.583780  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 08:02:44.583792  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:02:44.598048  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:02:44.598065  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:02:44.670126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:02:44.661736   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.662310   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664031   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664536   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.666240   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 08:02:44.661736   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.662310   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664031   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664536   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.666240   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:02:44.670142  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:02:44.670153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:02:44.741133  418823 logs.go:123] Gathering logs for container status ...
	I1210 08:02:44.741153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:02:44.768780  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 08:02:44.768797  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 08:02:44.836964  418823 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 08:02:44.837011  418823 out.go:285] * 
	W1210 08:02:44.837080  418823 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.837155  418823 out.go:285] * 
	W1210 08:02:44.839300  418823 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 08:02:44.844978  418823 out.go:203] 
	W1210 08:02:44.848781  418823 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.848820  418823 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 08:02:44.848841  418823 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 08:02:44.852612  418823 out.go:203] 
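	# [editor's note] Triage sketch for the failure above, assuming shell
	# access to the node (e.g. 'minikube ssh'); the kubelet journal further
	# down shows the process exiting on cgroup v1 validation, which the first
	# command confirms directly:
	stat -fc %T /sys/fs/cgroup    # "tmpfs" => cgroup v1, "cgroup2fs" => cgroup v2
	systemctl status kubelet      # per the kubeadm suggestion above
	journalctl -xeu kubelet | tail -n 50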
	
	
	==> CRI-O <==
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.462758438Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=30754360-77fc-41d9-961a-703309105bf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.463612109Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ce58a9d0-ec5e-41a7-a162-73ed5f175442 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464131886Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=af48f6b4-c6c8-458a-8d08-3443ae3e881b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464662517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4191b9f9-c176-40bc-b3bb-ec0edd3076c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465135606Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eaddf230-cd32-4499-a396-5bbd1b1cb31a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465587147Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=300cd477-ebce-4fed-8c84-bc9781d52848 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.466022016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1367fa60-098e-4704-b6f3-b114a75d5405 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067588181Z" level=info msg="Checking image status: kicbase/echo-server:functional-314220" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067769762Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067811068Z" level=info msg="Image kicbase/echo-server:functional-314220 not found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067872754Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-314220 found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.095996195Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-314220" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.096290073Z" level=info msg="Image docker.io/kicbase/echo-server:functional-314220 not found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.09635135Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-314220 found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132021615Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-314220" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132192218Z" level=info msg="Image localhost/kicbase/echo-server:functional-314220 not found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.13224538Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-314220 found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091386609Z" level=info msg="Checking image status: kicbase/echo-server:functional-314220" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091538619Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091581639Z" level=info msg="Image kicbase/echo-server:functional-314220 not found" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091640035Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-314220 found" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.133859584Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-314220" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.133994224Z" level=info msg="Image docker.io/kicbase/echo-server:functional-314220 not found" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.134034315Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-314220 found" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.166199113Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-314220" id=16f24585-97cc-4a0b-a37c-9ad94456e987 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:04:58.621434   23533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:04:58.621856   23533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:04:58.623585   23533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:04:58.624178   23533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:04:58.625671   23533 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	[Dec10 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 08:04:58 up  2:47,  0 user,  load average: 0.24, 0.23, 0.45
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:04:56 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:04:56 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 496.
	Dec 10 08:04:56 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:56 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:56 functional-314220 kubelet[23395]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:56 functional-314220 kubelet[23395]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:56 functional-314220 kubelet[23395]: E1210 08:04:56.974797   23395 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:04:56 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:04:56 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:04:57 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 497.
	Dec 10 08:04:57 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:57 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:57 functional-314220 kubelet[23441]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:57 functional-314220 kubelet[23441]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:57 functional-314220 kubelet[23441]: E1210 08:04:57.728209   23441 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:04:57 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:04:57 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:04:58 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 10 08:04:58 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:58 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:58 functional-314220 kubelet[23496]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:58 functional-314220 kubelet[23496]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:58 functional-314220 kubelet[23496]: E1210 08:04:58.466279   23496 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:04:58 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:04:58 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
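	# [editor's note] A minimal sketch of the override named in the
	# SystemVerification warning earlier ('FailCgroupV1' set to 'false'):
	# adding the field to the KubeletConfiguration stops the validation
	# failure shown above. Hedged: minikube regenerates
	# /var/lib/kubelet/config.yaml on each start, so treat this as a manual
	# repro aid, not a durable fix.
	echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	sudo systemctl restart kubelet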
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (373.741474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (2.97s)
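The status helper selects a single field of minikube's status output via a Go template; the same pattern reads the other fields used in this report's post-mortems ({{.Host}} appears further down; {{.Kubelet}} is assumed by analogy with the other documented status fields):

    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p functional-314220
    out/minikube-linux-arm64 status --format='{{.Host}}' -p functional-314220
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p functional-314220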

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-314220 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-314220 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (50.861086ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-314220 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-314220 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-314220 describe po hello-node-connect: exit status 1 (60.890489ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-314220 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-314220 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-314220 logs -l app=hello-node-connect: exit status 1 (54.624188ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-314220 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-314220 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-314220 describe svc hello-node-connect: exit status 1 (61.306666ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-314220 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
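Every kubectl call in this post-mortem fails the same way because nothing is listening on the apiserver endpoint; a direct probe without kubectl (a sketch, assuming the host can reach the node IP) reproduces the refusal:

    curl -k --max-time 5 https://192.168.49.2:8441/healthz
    # expected here: "Connection refused", matching the kubectl stderr above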
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
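The full inspect blob above can be narrowed to just the apiserver mapping (the NetworkSettings block binds 8441/tcp to 127.0.0.1:33161):

    docker port functional-314220 8441/tcp
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-314220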
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (298.796185ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-314220 ssh sudo cat /usr/share/ca-certificates/378528.pem                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image ls                                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image save kicbase/echo-server:functional-314220 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /etc/ssl/certs/3785282.pem                                                                                                 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image rm kicbase/echo-server:functional-314220 --alsologtostderr                                                                        │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /usr/share/ca-certificates/3785282.pem                                                                                     │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image ls                                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo cat /etc/test/nested/copy/378528/hosts                                                                                         │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image ls                                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ service │ functional-314220 service list                                                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ image   │ functional-314220 image save --daemon kicbase/echo-server:functional-314220 --alsologtostderr                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ service │ functional-314220 service list -o json                                                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ service │ functional-314220 service --namespace=default --https --url hello-node                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ ssh     │ functional-314220 ssh echo hello                                                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │ 10 Dec 25 08:03 UTC │
	│ service │ functional-314220 service hello-node --url --format={{.IP}}                                                                                               │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ ssh     │ functional-314220 ssh cat /etc/hostname                                                                                                                   │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │ 10 Dec 25 08:03 UTC │
	│ service │ functional-314220 service hello-node --url                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ tunnel  │ functional-314220 tunnel --alsologtostderr                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ tunnel  │ functional-314220 tunnel --alsologtostderr                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ tunnel  │ functional-314220 tunnel --alsologtostderr                                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:03 UTC │                     │
	│ addons  │ functional-314220 addons list                                                                                                                             │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:04 UTC │ 10 Dec 25 08:04 UTC │
	│ addons  │ functional-314220 addons list -o json                                                                                                                     │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:04 UTC │ 10 Dec 25 08:04 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:50:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:50:32.899349  418823 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:50:32.899467  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899470  418823 out.go:374] Setting ErrFile to fd 2...
	I1210 07:50:32.899475  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899728  418823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:50:32.900077  418823 out.go:368] Setting JSON to false
	I1210 07:50:32.900875  418823 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9183,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:50:32.900927  418823 start.go:143] virtualization:  
	I1210 07:50:32.904391  418823 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:50:32.909970  418823 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:50:32.910062  418823 notify.go:221] Checking for updates...
	I1210 07:50:32.913755  418823 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:50:32.917032  418823 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:50:32.919882  418823 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:50:32.922630  418823 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:50:32.926514  418823 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:50:32.929831  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:32.929952  418823 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:50:32.973254  418823 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:50:32.973375  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.030281  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.020639734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.030378  418823 docker.go:319] overlay module found
	I1210 07:50:33.033510  418823 out.go:179] * Using the docker driver based on existing profile
	I1210 07:50:33.036367  418823 start.go:309] selected driver: docker
	I1210 07:50:33.036393  418823 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.036475  418823 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:50:33.036573  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.101667  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.09179395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.102098  418823 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:50:33.102120  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:33.102171  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:33.102212  418823 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.107143  418823 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:50:33.110125  418823 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:50:33.113004  418823 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:50:33.115816  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:33.115854  418823 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:50:33.115862  418823 cache.go:65] Caching tarball of preloaded images
	I1210 07:50:33.115956  418823 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:50:33.115966  418823 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 07:50:33.115961  418823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:50:33.116084  418823 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:50:33.135517  418823 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:50:33.135528  418823 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:50:33.135548  418823 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:50:33.135579  418823 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:50:33.135644  418823 start.go:364] duration metric: took 47.935µs to acquireMachinesLock for "functional-314220"
	I1210 07:50:33.135662  418823 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:50:33.135667  418823 fix.go:54] fixHost starting: 
	I1210 07:50:33.135928  418823 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:50:33.153142  418823 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:50:33.153176  418823 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:50:33.156510  418823 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:50:33.156542  418823 machine.go:94] provisionDockerMachine start ...
	I1210 07:50:33.156629  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.173363  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.173679  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.173685  418823 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:50:33.306701  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.306715  418823 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:50:33.306784  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.323402  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.323703  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.323711  418823 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:50:33.463802  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.463873  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.481663  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.481979  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.481993  418823 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:50:33.615371  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:50:33.615387  418823 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:50:33.615415  418823 ubuntu.go:190] setting up certificates
	I1210 07:50:33.615424  418823 provision.go:84] configureAuth start
	I1210 07:50:33.615481  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:33.633344  418823 provision.go:143] copyHostCerts
	I1210 07:50:33.633409  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:50:33.633416  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:50:33.633490  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:50:33.633597  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:50:33.633601  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:50:33.633627  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:50:33.633685  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:50:33.633688  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:50:33.633710  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:50:33.633815  418823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
	I1210 07:50:33.839628  418823 provision.go:177] copyRemoteCerts
	I1210 07:50:33.839683  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:50:33.839721  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.857491  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:33.954662  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:50:33.972200  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:50:33.989946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:50:34.010600  418823 provision.go:87] duration metric: took 395.152109ms to configureAuth
	I1210 07:50:34.010620  418823 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:50:34.010837  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:34.010945  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.031319  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:34.031635  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:34.031646  418823 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:50:34.394456  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:50:34.394468  418823 machine.go:97] duration metric: took 1.237919377s to provisionDockerMachine
	I1210 07:50:34.394480  418823 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:50:34.394492  418823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:50:34.394553  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:50:34.394594  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.425725  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.527110  418823 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:50:34.530555  418823 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:50:34.530572  418823 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:50:34.530582  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:50:34.530636  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:50:34.530720  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:50:34.530798  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:50:34.530841  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:50:34.538245  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:34.555946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:50:34.573402  418823 start.go:296] duration metric: took 178.908422ms for postStartSetup
	I1210 07:50:34.573478  418823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:50:34.573515  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.591144  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.684092  418823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:50:34.688828  418823 fix.go:56] duration metric: took 1.553153828s for fixHost
	I1210 07:50:34.688843  418823 start.go:83] releasing machines lock for "functional-314220", held for 1.553192081s
	I1210 07:50:34.688922  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:34.705960  418823 ssh_runner.go:195] Run: cat /version.json
	I1210 07:50:34.705982  418823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:50:34.706002  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.706033  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.724227  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.734363  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.905519  418823 ssh_runner.go:195] Run: systemctl --version
	I1210 07:50:34.911896  418823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:50:34.947949  418823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:50:34.952265  418823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:50:34.952348  418823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:50:34.960087  418823 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:50:34.960100  418823 start.go:496] detecting cgroup driver to use...
	I1210 07:50:34.960131  418823 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 07:50:34.960194  418823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:50:34.975734  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:50:34.988235  418823 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:50:34.988306  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:50:35.008024  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:50:35.023507  418823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:50:35.140776  418823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:50:35.287143  418823 docker.go:234] disabling docker service ...
	I1210 07:50:35.287205  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:50:35.302191  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:50:35.316045  418823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:50:35.435977  418823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:50:35.558581  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:50:35.570905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:50:35.584271  418823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:50:35.584341  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.593128  418823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:50:35.593191  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.602242  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.611204  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.619936  418823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:50:35.627869  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.636843  418823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.645059  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
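Taken together, the sed chain above rewrites four settings in /etc/crio/crio.conf.d/02-crio.conf. A sketch of checking the result by hand; the expected lines are reconstructed from the commands themselves, not captured from the node:

    # Grep for the keys the sed chain above touched.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (reconstructed):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",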
	I1210 07:50:35.653527  418823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:50:35.660914  418823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 07:50:35.668098  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:35.785150  418823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:50:35.938526  418823 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:50:35.938594  418823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:50:35.943564  418823 start.go:564] Will wait 60s for crictl version
	I1210 07:50:35.943634  418823 ssh_runner.go:195] Run: which crictl
	I1210 07:50:35.950126  418823 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:50:35.976476  418823 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:50:35.976565  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.013250  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.049514  418823 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:50:36.052392  418823 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
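A sketch of the same network data fetched with a shorter template; the expected values are inferred from the container inspect JSON earlier in this log (IPAddress 192.168.49.2, prefix length 24, Gateway 192.168.49.1), not from fresh output:

    docker network inspect functional-314220 \
      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # inferred: functional-314220 192.168.49.0/24 192.168.49.1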
	I1210 07:50:36.073467  418823 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:50:36.080871  418823 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 07:50:36.083861  418823 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:50:36.084003  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:36.084083  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.122033  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.122045  418823 crio.go:433] Images already preloaded, skipping extraction
	I1210 07:50:36.122104  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.147981  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.147994  418823 cache_images.go:86] Images are preloaded, skipping loading
	I1210 07:50:36.148000  418823 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:50:36.148093  418823 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 07:50:36.148179  418823 ssh_runner.go:195] Run: crio config
	I1210 07:50:36.223557  418823 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 07:50:36.223582  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:36.223591  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:36.223605  418823 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:50:36.223627  418823 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:50:36.223742  418823 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
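	One way to sanity-check a generated config like the one above before reuse, as a sketch: recent kubeadm releases ship a `kubeadm config validate` subcommand, and the binary and file paths below match the ones this log writes a few lines further down.

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new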
	
	I1210 07:50:36.223809  418823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:50:36.231667  418823 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:50:36.231750  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:50:36.239592  418823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:50:36.252574  418823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:50:36.265349  418823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1210 07:50:36.278251  418823 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:50:36.281864  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:36.395980  418823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:50:36.662807  418823 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:50:36.662818  418823 certs.go:195] generating shared ca certs ...
	I1210 07:50:36.662833  418823 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:50:36.662974  418823 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:50:36.663036  418823 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:50:36.663044  418823 certs.go:257] generating profile certs ...
	I1210 07:50:36.663128  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:50:36.663184  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:50:36.663221  418823 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:50:36.663326  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:50:36.663359  418823 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:50:36.663370  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:50:36.663396  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:50:36.663419  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:50:36.663444  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:50:36.663487  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:36.664085  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:50:36.684901  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:50:36.704871  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:50:36.724001  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:50:36.742252  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:50:36.759395  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:50:36.776213  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:50:36.793265  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:50:36.810512  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:50:36.828353  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:50:36.845515  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:50:36.862765  418823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:50:36.875122  418823 ssh_runner.go:195] Run: openssl version
	I1210 07:50:36.881447  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.888818  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:50:36.896054  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899817  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899876  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.940839  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:50:36.948274  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.955506  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:50:36.963139  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966818  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966873  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:37.008344  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:50:37.018542  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.028848  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:50:37.037787  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041789  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041883  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.083088  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
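
	The three steps above (symlink the PEM into /etc/ssl/certs, compute its subject hash, test for the <hash>.0 link) follow OpenSSL's lookup-by-hash convention: libssl locates CA certificates by an eight-hex-digit subject hash, so each PEM needs a <hash>.0 symlink. A minimal sketch of the same idea, using the pairing the log reports (378528.pem -> 51391683.0):

	    # print the subject hash OpenSSL uses for directory lookup
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem)
	    # expose the certificate under its hash name so libssl can find it
	    sudo ln -fs /usr/share/ca-certificates/378528.pem "/etc/ssl/certs/${hash}.0"
	    # confirm a symlink now exists at the hash name
	    sudo test -L "/etc/ssl/certs/${hash}.0"
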
	I1210 07:50:37.090399  418823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:50:37.093984  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:50:37.134711  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:50:37.175584  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:50:37.216322  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:50:37.258210  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:50:37.300727  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
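
	The -checkend 86400 probes above are a cheap expiry test: openssl exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero otherwise, which is presumably how minikube decides whether control-plane certificates need regenerating before reuse. Sketch:

	    # exit 0 while the cert stays valid for the next 24h; non-zero means it expires within that window
	    if ! openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	        echo "certificate expires within 24h; regenerate it" >&2
	    fi
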
	I1210 07:50:37.343870  418823 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:37.343957  418823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:50:37.344031  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.373693  418823 cri.go:89] found id: ""
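
	The empty found id: "" result means the label filter matched nothing, i.e. CRI-O currently has no kube-system containers, so minikube falls through to inspecting the on-disk kubeadm state instead of stopping live workloads. The filter on its own, exactly as logged:

	    # IDs only (-a includes exited containers), restricted to pods in the kube-system namespace
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
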
	I1210 07:50:37.373755  418823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:50:37.382429  418823 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:50:37.382439  418823 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:50:37.382493  418823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:50:37.389449  418823 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.389979  418823 kubeconfig.go:125] found "functional-314220" server: "https://192.168.49.2:8441"
	I1210 07:50:37.391548  418823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:50:37.399103  418823 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 07:36:02.271715799 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 07:50:36.273283366 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
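
	Drift detection here is plain diff -u against a freshly rendered config: exit 0 means nothing changed, non-zero yields the hunk above (the test replaced the default enable-admission-plugins list with NamespaceAutoProvision) and forces a reconfigure. A minimal sketch of that gate:

	    # diff exits 0 when the rendered kubeadm config is unchanged, non-zero when it drifted
	    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	        echo "kubeadm config drift detected; reconfiguring" >&2
	        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    fi
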
	I1210 07:50:37.399128  418823 kubeadm.go:1161] stopping kube-system containers ...
	I1210 07:50:37.399140  418823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 07:50:37.399196  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.434614  418823 cri.go:89] found id: ""
	I1210 07:50:37.434674  418823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 07:50:37.455844  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:50:37.463706  418823 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 07:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 07:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 07:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 10 07:40 /etc/kubernetes/scheduler.conf
	
	I1210 07:50:37.463780  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:50:37.471472  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:50:37.478782  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.478837  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:50:37.486355  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.493976  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.494040  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.501640  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:50:37.509588  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.509645  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
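
	The grep/rm pairs above scrub stale kubeconfigs: admin.conf still references the expected endpoint and is kept, while kubelet.conf, controller-manager.conf and scheduler.conf do not and are deleted so the kubeconfig phase below can rewrite them. Condensed, the scrub is roughly:

	    ep=https://control-plane.minikube.internal:8441
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # remove any kubeconfig that no longer references the expected apiserver endpoint
	        sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
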
	I1210 07:50:37.517276  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:50:37.525049  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:37.571686  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.573879  418823 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.002165526s)
	I1210 07:50:39.573940  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.780126  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.857417  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
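
	Rather than a full kubeadm init, the restart replays individual init phases in the order logged; stripped of the PATH wrapper, the sequence is:

	    cfg=/var/tmp/minikube/kubeadm.yaml
	    kubeadm init phase certs all --config "$cfg"           # regenerate certificates
	    kubeadm init phase kubeconfig all --config "$cfg"      # rewrite the scrubbed kubeconfigs
	    kubeadm init phase kubelet-start --config "$cfg"       # write kubelet config and start it
	    kubeadm init phase control-plane all --config "$cfg"   # static-pod manifests for apiserver, controller-manager, scheduler
	    kubeadm init phase etcd local --config "$cfg"          # static-pod manifest for local etcd
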
	I1210 07:50:39.903067  418823 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:50:39.903139  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:40.403973  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:40.903804  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:41.403355  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:41.904207  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:42.404057  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:42.903818  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:43.404234  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:43.904036  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:44.403331  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:44.904250  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:45.404168  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:45.904093  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:46.404204  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:46.904144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:47.404213  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:47.903250  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:48.404144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:48.904262  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:49.404011  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:49.903321  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:50.403990  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:50.903998  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:51.403914  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:51.903990  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:52.403942  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:52.903796  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:53.403576  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:53.903966  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:54.403314  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:54.904147  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:55.404245  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:55.903953  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:56.403340  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:56.904274  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:57.404124  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:57.903801  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:58.404241  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:58.903869  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:59.403287  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:59.903954  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:00.403352  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:00.904043  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:01.403894  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:01.903648  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:02.404219  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:02.903678  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:03.403948  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:03.904224  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:04.403353  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:04.904036  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:05.404217  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:05.903272  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:06.404216  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:06.903390  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:07.403379  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:07.903804  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:08.404215  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:08.904228  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:09.404143  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:09.904284  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:10.403331  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:10.904097  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:11.404225  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:11.903848  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:12.403282  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:12.903360  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:13.403955  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:13.903329  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:14.404081  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:14.903215  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:15.403223  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:15.903728  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:16.403337  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:16.904035  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:17.403389  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:17.904062  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:18.403915  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:18.903844  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:19.403287  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:19.903456  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:20.403269  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:20.903919  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:21.403294  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:21.903959  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:22.403330  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:22.903425  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:23.403210  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:23.904289  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:24.403468  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:24.903578  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.403340  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:25.903276  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.404241  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:26.903945  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:27.404152  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:27.903337  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.404037  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:28.903401  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:29.403321  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:29.904015  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.403353  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:30.904231  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.403897  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:31.903428  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:32.404285  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:32.904059  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.403419  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:33.903340  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.404109  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:34.903323  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.404151  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:35.903331  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.403229  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:36.904295  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:37.403231  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:37.904159  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.403982  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:38.903898  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:39.403315  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
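
	The run of pgrep calls above is the apiserver wait loop: roughly every 500 ms minikube checks for a kube-apiserver process, and across the whole minute shown none ever appears. In shell the loop reduces to something like (a sketch; the real loop is Go code with a deadline):

	    # poll until a kube-apiserver process whose cmdline mentions minikube appears
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	        sleep 0.5
	    done
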
	I1210 07:51:39.903344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:39.903423  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:39.933715  418823 cri.go:89] found id: ""
	I1210 07:51:39.933730  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.933737  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:39.933741  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:39.933807  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:39.959343  418823 cri.go:89] found id: ""
	I1210 07:51:39.959358  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.959366  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:39.959371  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:39.959428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:39.985280  418823 cri.go:89] found id: ""
	I1210 07:51:39.985294  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.985302  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:39.985307  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:39.985366  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:40.021888  418823 cri.go:89] found id: ""
	I1210 07:51:40.021904  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.021912  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:40.021917  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:40.022019  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:40.050222  418823 cri.go:89] found id: ""
	I1210 07:51:40.050238  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.050245  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:40.050251  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:40.050314  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:40.076513  418823 cri.go:89] found id: ""
	I1210 07:51:40.076528  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.076536  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:40.076541  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:40.076603  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:40.106190  418823 cri.go:89] found id: ""
	I1210 07:51:40.106206  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.106213  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:40.106221  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:40.106232  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:40.171760  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:40.171781  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:40.188577  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:40.188594  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:40.259869  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:40.250864   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.251869   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.253556   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.254191   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.255824   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:40.250864   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.251869   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.253556   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.254191   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.255824   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
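
	Every describe-nodes attempt fails identically: kubectl cannot fetch the API group list because nothing answers on localhost:8441, consistent with the apiserver process never appearing above. A quick way to confirm the port is closed on the node (a sketch assuming ss from iproute2 is present in the kicbase image):

	    # no output means nothing is listening on the apiserver port
	    sudo ss -ltn 'sport = :8441'
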
	I1210 07:51:40.259893  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:40.259905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:40.330751  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:40.330772  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
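
	The container-status gather uses a small fallback chain: resolve crictl if it is on PATH (otherwise let the bare name fail) and drop back to docker ps -a when the CRI listing errors. Tidied into equivalent shell:

	    # prefer crictl when available; fall back to docker if the crictl invocation fails
	    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a
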
	I1210 07:51:42.864666  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.875209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:42.875278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:42.906775  418823 cri.go:89] found id: ""
	I1210 07:51:42.906788  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.906796  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:42.906802  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:42.906860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:42.932120  418823 cri.go:89] found id: ""
	I1210 07:51:42.932134  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.932142  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:42.932147  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:42.932207  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:42.960769  418823 cri.go:89] found id: ""
	I1210 07:51:42.960784  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.960793  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:42.960798  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:42.960857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:42.986269  418823 cri.go:89] found id: ""
	I1210 07:51:42.986285  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.986294  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:42.986299  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:42.986361  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:43.021139  418823 cri.go:89] found id: ""
	I1210 07:51:43.021155  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.021163  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:43.021168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:43.021241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:43.047486  418823 cri.go:89] found id: ""
	I1210 07:51:43.047501  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.047508  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:43.047513  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:43.047576  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:43.073233  418823 cri.go:89] found id: ""
	I1210 07:51:43.073247  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.073255  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:43.073263  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:43.073273  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:43.139078  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:43.139105  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:43.153579  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:43.153595  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:43.240938  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:43.232595   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.233028   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.234681   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.235233   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.236893   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:43.232595   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.233028   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.234681   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.235233   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.236893   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:43.240958  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:43.240970  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:43.308772  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:43.308794  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:45.841619  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.852276  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:45.852345  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:45.887199  418823 cri.go:89] found id: ""
	I1210 07:51:45.887215  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.887222  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:45.887237  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:45.887324  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:45.918859  418823 cri.go:89] found id: ""
	I1210 07:51:45.918873  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.918880  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:45.918885  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:45.918944  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:45.943991  418823 cri.go:89] found id: ""
	I1210 07:51:45.944006  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.944014  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:45.944019  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:45.944088  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:45.970351  418823 cri.go:89] found id: ""
	I1210 07:51:45.970371  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.970379  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:45.970384  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:45.970444  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:45.995587  418823 cri.go:89] found id: ""
	I1210 07:51:45.995601  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.995609  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:45.995614  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:45.995678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:46.023570  418823 cri.go:89] found id: ""
	I1210 07:51:46.023586  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.023593  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:46.023599  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:46.023660  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:46.056294  418823 cri.go:89] found id: ""
	I1210 07:51:46.056309  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.056317  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:46.056325  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:46.056336  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:46.125021  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:46.125041  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:46.139709  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:46.139728  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:46.233096  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:46.221808   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.223333   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227730   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.229200   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:46.221808   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.223333   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227730   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.229200   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:46.233116  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:46.233127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:46.302440  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:46.302460  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:48.833091  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.843740  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:48.843804  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:48.869041  418823 cri.go:89] found id: ""
	I1210 07:51:48.869057  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.869064  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:48.869070  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:48.869139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:48.893750  418823 cri.go:89] found id: ""
	I1210 07:51:48.893765  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.893784  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:48.893790  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:48.893850  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:48.919315  418823 cri.go:89] found id: ""
	I1210 07:51:48.919330  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.919337  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:48.919343  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:48.919413  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:48.944091  418823 cri.go:89] found id: ""
	I1210 07:51:48.944107  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.944114  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:48.944120  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:48.944178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:48.968980  418823 cri.go:89] found id: ""
	I1210 07:51:48.968995  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.969002  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:48.969007  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:48.969066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:48.994258  418823 cri.go:89] found id: ""
	I1210 07:51:48.994272  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.994279  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:48.994294  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:48.994354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:49.021988  418823 cri.go:89] found id: ""
	I1210 07:51:49.022004  418823 logs.go:282] 0 containers: []
	W1210 07:51:49.022012  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:49.022019  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:49.022029  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:49.089579  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:49.089605  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:49.118629  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:49.118648  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:49.191180  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:49.191204  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:49.208309  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:49.208325  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:49.273461  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:49.264556   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.265266   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.266981   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.267634   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.269070   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:49.264556   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.265266   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.266981   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.267634   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.269070   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:51.775168  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.785506  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:51.785567  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:51.810828  418823 cri.go:89] found id: ""
	I1210 07:51:51.810843  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.810860  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:51.810865  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:51.810926  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:51.835270  418823 cri.go:89] found id: ""
	I1210 07:51:51.835285  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.835292  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:51.835297  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:51.835357  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:51.862106  418823 cri.go:89] found id: ""
	I1210 07:51:51.862121  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.862129  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:51.862134  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:51.862203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:51.887726  418823 cri.go:89] found id: ""
	I1210 07:51:51.887741  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.887749  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:51.887754  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:51.887816  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:51.916383  418823 cri.go:89] found id: ""
	I1210 07:51:51.916398  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.916405  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:51.916409  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:51.916479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:51.945251  418823 cri.go:89] found id: ""
	I1210 07:51:51.945266  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.945273  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:51.945278  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:51.945337  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:51.970333  418823 cri.go:89] found id: ""
	I1210 07:51:51.970348  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.970357  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:51.970365  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:51.970385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:51.998969  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:51.998986  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:52.071390  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:52.071420  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:52.087389  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:52.087406  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:52.154961  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:52.145998   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.146914   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.148561   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.149089   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.150792   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:52.145998   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.146914   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.148561   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.149089   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.150792   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:52.154973  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:52.154985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:54.734714  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.745090  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:54.745151  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:54.770064  418823 cri.go:89] found id: ""
	I1210 07:51:54.770079  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.770086  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:54.770091  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:54.770149  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:54.796152  418823 cri.go:89] found id: ""
	I1210 07:51:54.796167  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.796174  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:54.796179  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:54.796241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:54.822080  418823 cri.go:89] found id: ""
	I1210 07:51:54.822095  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.822102  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:54.822107  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:54.822175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:54.849868  418823 cri.go:89] found id: ""
	I1210 07:51:54.849883  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.849891  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:54.849895  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:54.849951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:54.875726  418823 cri.go:89] found id: ""
	I1210 07:51:54.875741  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.875748  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:54.875753  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:54.875815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:54.905509  418823 cri.go:89] found id: ""
	I1210 07:51:54.905524  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.905531  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:54.905536  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:54.905595  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:54.931115  418823 cri.go:89] found id: ""
	I1210 07:51:54.931138  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.931146  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:54.931154  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:54.931164  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:54.997885  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:54.997906  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:55.030067  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:55.030094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:55.099098  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:55.099116  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:55.113912  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:55.113934  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:55.200955  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:55.191641   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.192478   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.194192   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.195162   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.196812   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:51:57.701770  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.712296  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:57.712359  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:57.742200  418823 cri.go:89] found id: ""
	I1210 07:51:57.742217  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.742225  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:57.742230  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:57.742288  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:57.770042  418823 cri.go:89] found id: ""
	I1210 07:51:57.770056  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.770063  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:57.770068  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:57.770126  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:57.795451  418823 cri.go:89] found id: ""
	I1210 07:51:57.795464  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.795471  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:57.795477  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:57.795536  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:57.823068  418823 cri.go:89] found id: ""
	I1210 07:51:57.823084  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.823091  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:57.823097  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:57.823160  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:57.849968  418823 cri.go:89] found id: ""
	I1210 07:51:57.849982  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.849998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:57.850003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:57.850064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:57.877868  418823 cri.go:89] found id: ""
	I1210 07:51:57.877881  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.877889  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:57.877894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:57.877954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:57.903803  418823 cri.go:89] found id: ""
	I1210 07:51:57.903823  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.903830  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:57.903838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:57.903849  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:57.970812  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:57.970831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:57.985765  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:57.985786  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:58.070052  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:58.061598   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.062333   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.063943   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.064517   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.066053   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:51:58.070062  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:58.070076  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:58.138971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:58.138993  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
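Each retry round above starts by asking CRI-O for every control-plane component by name (the cri.go:54/89 pairs) and, with zero IDs returned, logs the matching W-level miss. A simplified Go sketch of that pass (an assumption: crictl reachable via sudo, and this is a stand-in rather than minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same invocation as the Run: lines above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			// Matches the W-level lines: no container found for this name.
			fmt.Printf("no container was found matching %q\n", name)
		} else {
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
}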
	I1210 07:52:00.678904  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.689904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:00.689965  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:00.717867  418823 cri.go:89] found id: ""
	I1210 07:52:00.717882  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.717889  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:00.717895  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:00.717960  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:00.746728  418823 cri.go:89] found id: ""
	I1210 07:52:00.746743  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.746750  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:00.746755  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:00.746815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:00.771995  418823 cri.go:89] found id: ""
	I1210 07:52:00.772009  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.772016  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:00.772021  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:00.772084  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:00.801311  418823 cri.go:89] found id: ""
	I1210 07:52:00.801326  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.801333  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:00.801338  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:00.801400  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:00.827977  418823 cri.go:89] found id: ""
	I1210 07:52:00.827992  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.827999  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:00.828004  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:00.828064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:00.857640  418823 cri.go:89] found id: ""
	I1210 07:52:00.857653  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.857661  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:00.857666  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:00.857723  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:00.886162  418823 cri.go:89] found id: ""
	I1210 07:52:00.886176  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.886183  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:00.886192  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:00.886203  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:00.900682  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:00.900699  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:00.962996  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:00.954850   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.955457   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957002   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957542   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.959068   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:00.963006  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:00.963044  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:01.030923  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:01.030945  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:01.064661  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:01.064678  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
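With no containers found, the collector falls back to host-level sources, one shell pipeline per "Gathering logs for ..." line; the shuffled order of those lines between rounds is consistent with iterating a map. A local sketch (assumption: the commands are copied verbatim from the Run: lines but executed locally instead of through minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, pipeline := range sources {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("  collected %d bytes\n", len(out))
	}
}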
	I1210 07:52:03.634114  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.644373  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:03.644437  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:03.670228  418823 cri.go:89] found id: ""
	I1210 07:52:03.670242  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.670250  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:03.670255  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:03.670313  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:03.697715  418823 cri.go:89] found id: ""
	I1210 07:52:03.697730  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.697737  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:03.697742  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:03.697800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:03.725317  418823 cri.go:89] found id: ""
	I1210 07:52:03.725331  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.725338  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:03.725344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:03.725406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:03.754932  418823 cri.go:89] found id: ""
	I1210 07:52:03.754947  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.754954  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:03.754959  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:03.755055  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:03.781710  418823 cri.go:89] found id: ""
	I1210 07:52:03.781724  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.781731  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:03.781736  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:03.781799  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:03.806748  418823 cri.go:89] found id: ""
	I1210 07:52:03.806761  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.806769  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:03.806773  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:03.806839  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:03.831941  418823 cri.go:89] found id: ""
	I1210 07:52:03.831956  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.831963  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:03.831970  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:03.831980  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:03.893889  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:03.885770   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.886207   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.887763   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.888077   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.889552   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:03.893899  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:03.893910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:03.963740  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:03.963762  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:03.994617  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:03.994633  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:04.064848  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:04.064869  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
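The "describe nodes" source is the only one that needs a live apiserver, so it is the only one that fails; logs.go:130 records the non-zero exit as a warning and the round continues. A sketch of that step (paths copied from the log; running it as a plain local command rather than over SSH is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	cmd := exec.Command("/bin/bash", "-c",
		"sudo "+kubectl+" describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Mirrors logs.go:130: the failure is logged as a warning and the
		// collector simply moves on to the next source.
		fmt.Printf("failed describe nodes: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}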
	I1210 07:52:06.580763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.590814  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:06.590876  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:06.617862  418823 cri.go:89] found id: ""
	I1210 07:52:06.617877  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.617884  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:06.617889  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:06.617952  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:06.642344  418823 cri.go:89] found id: ""
	I1210 07:52:06.642364  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.642372  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:06.642376  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:06.642434  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:06.668168  418823 cri.go:89] found id: ""
	I1210 07:52:06.668181  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.668189  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:06.668194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:06.668252  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:06.693569  418823 cri.go:89] found id: ""
	I1210 07:52:06.693584  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.693591  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:06.693596  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:06.693655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:06.719248  418823 cri.go:89] found id: ""
	I1210 07:52:06.719272  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.719281  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:06.719286  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:06.719353  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:06.744269  418823 cri.go:89] found id: ""
	I1210 07:52:06.744298  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.744306  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:06.744311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:06.744384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:06.769456  418823 cri.go:89] found id: ""
	I1210 07:52:06.769485  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.769493  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:06.769501  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:06.769520  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:06.835122  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:06.826477   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.827191   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.828919   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.829490   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.831268   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:06.835134  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:06.835145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:06.903874  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:06.903896  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:06.932245  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:06.932261  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:06.999686  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:06.999707  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:09.516631  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.527151  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:09.527214  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:09.553162  418823 cri.go:89] found id: ""
	I1210 07:52:09.553175  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.553182  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:09.553187  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:09.553248  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:09.577770  418823 cri.go:89] found id: ""
	I1210 07:52:09.577785  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.577792  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:09.577797  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:09.577857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:09.603741  418823 cri.go:89] found id: ""
	I1210 07:52:09.603755  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.603765  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:09.603770  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:09.603830  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:09.631507  418823 cri.go:89] found id: ""
	I1210 07:52:09.631521  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.631529  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:09.631534  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:09.631597  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:09.657315  418823 cri.go:89] found id: ""
	I1210 07:52:09.657329  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.657342  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:09.657347  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:09.657406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:09.682591  418823 cri.go:89] found id: ""
	I1210 07:52:09.682606  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.682613  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:09.682619  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:09.682677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:09.708020  418823 cri.go:89] found id: ""
	I1210 07:52:09.708034  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.708042  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:09.708049  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:09.708062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:09.777964  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:09.777985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:09.792349  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:09.792367  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:09.854411  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:09.846045   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.846788   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848379   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848685   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.850192   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:09.854421  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:09.854434  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:09.922233  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:09.922255  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:12.457145  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.468643  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:12.468721  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:12.494760  418823 cri.go:89] found id: ""
	I1210 07:52:12.494774  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.494782  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:12.494787  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:12.494853  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:12.520639  418823 cri.go:89] found id: ""
	I1210 07:52:12.520653  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.520673  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:12.520678  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:12.520738  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:12.546812  418823 cri.go:89] found id: ""
	I1210 07:52:12.546827  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.546834  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:12.546839  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:12.546899  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:12.573531  418823 cri.go:89] found id: ""
	I1210 07:52:12.573546  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.573553  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:12.573558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:12.573623  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:12.600389  418823 cri.go:89] found id: ""
	I1210 07:52:12.600403  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.600411  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:12.600416  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:12.600475  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:12.630232  418823 cri.go:89] found id: ""
	I1210 07:52:12.630257  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.630265  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:12.630271  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:12.630340  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:12.656013  418823 cri.go:89] found id: ""
	I1210 07:52:12.656027  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.656035  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:12.656042  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:12.656058  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:12.727638  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:12.727667  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:12.742877  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:12.742895  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:12.807790  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:12.799348   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.800103   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.801856   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.802374   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.803909   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:12.807802  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:12.807814  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:12.876103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:12.876124  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:15.409499  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.424003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:15.424080  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:15.458307  418823 cri.go:89] found id: ""
	I1210 07:52:15.458341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.458348  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:15.458353  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:15.458428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:15.488619  418823 cri.go:89] found id: ""
	I1210 07:52:15.488634  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.488641  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:15.488646  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:15.488709  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:15.513795  418823 cri.go:89] found id: ""
	I1210 07:52:15.513809  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.513817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:15.513831  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:15.513888  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:15.539219  418823 cri.go:89] found id: ""
	I1210 07:52:15.539233  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.539240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:15.539245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:15.539305  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:15.565461  418823 cri.go:89] found id: ""
	I1210 07:52:15.565475  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.565490  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:15.565495  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:15.565554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:15.597327  418823 cri.go:89] found id: ""
	I1210 07:52:15.597341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.597348  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:15.597354  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:15.597412  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:15.622974  418823 cri.go:89] found id: ""
	I1210 07:52:15.622994  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.623001  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:15.623047  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:15.623059  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:15.690204  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:15.681087   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.681887   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.683667   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.684195   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.685805   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:15.690215  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:15.690226  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:15.758230  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:15.758252  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:15.788867  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:15.788884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:15.856134  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:15.856154  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:18.371925  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.382408  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:18.382482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:18.408893  418823 cri.go:89] found id: ""
	I1210 07:52:18.408907  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.408914  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:18.408919  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:18.408994  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:18.444341  418823 cri.go:89] found id: ""
	I1210 07:52:18.444355  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.444374  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:18.444380  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:18.444450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:18.476809  418823 cri.go:89] found id: ""
	I1210 07:52:18.476823  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.476830  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:18.476835  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:18.476892  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:18.503052  418823 cri.go:89] found id: ""
	I1210 07:52:18.503066  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.503073  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:18.503078  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:18.503150  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:18.529967  418823 cri.go:89] found id: ""
	I1210 07:52:18.529981  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.529998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:18.530003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:18.530095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:18.555604  418823 cri.go:89] found id: ""
	I1210 07:52:18.555619  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.555626  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:18.555631  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:18.555692  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:18.580758  418823 cri.go:89] found id: ""
	I1210 07:52:18.580773  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.580781  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:18.580789  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:18.580803  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:18.649536  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:18.641471   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.642112   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.643861   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.644377   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.645509   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:18.649546  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:18.649558  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:18.720152  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:18.720174  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:18.749804  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:18.749823  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:18.819943  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:18.819965  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
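
The cycle above repeats on every retry: minikube probes each control-plane component with `sudo crictl ps -a --quiet --name=<component>`, finds no containers, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal Go sketch of that probe loop, assuming crictl is on PATH (the function and output format here are illustrative, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the probe visible in the log above:
// sudo crictl ps -a --quiet --name=<component>
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; Fields drops blanks.
	return strings.Fields(string(out)), nil
}

func main() {
	// Same component list the log walks through on every retry.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainers(c)
		switch {
		case err != nil:
			fmt.Printf("probe %s failed: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("no container found matching %q\n", c)
		default:
			fmt.Printf("%s: %d container(s)\n", c, len(ids))
		}
	}
}
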
	I1210 07:52:21.337138  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.347127  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:21.347189  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:21.373895  418823 cri.go:89] found id: ""
	I1210 07:52:21.373918  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.373926  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:21.373931  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:21.373998  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:21.399869  418823 cri.go:89] found id: ""
	I1210 07:52:21.399896  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.399903  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:21.399908  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:21.399979  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:21.427202  418823 cri.go:89] found id: ""
	I1210 07:52:21.427219  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.427226  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:21.427231  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:21.427299  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:21.458325  418823 cri.go:89] found id: ""
	I1210 07:52:21.458348  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.458355  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:21.458360  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:21.458429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:21.488232  418823 cri.go:89] found id: ""
	I1210 07:52:21.488246  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.488253  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:21.488259  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:21.488318  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:21.523678  418823 cri.go:89] found id: ""
	I1210 07:52:21.523693  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.523700  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:21.523706  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:21.523774  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:21.554053  418823 cri.go:89] found id: ""
	I1210 07:52:21.554068  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.554076  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:21.554084  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:21.554094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:21.584626  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:21.584643  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:21.650495  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:21.650516  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:21.665376  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:21.665393  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:21.728186  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:21.719729   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.720322   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722158   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722717   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.724385   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:21.728197  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:21.728210  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:24.296826  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:24.306876  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:24.306941  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:24.331566  418823 cri.go:89] found id: ""
	I1210 07:52:24.331580  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.331587  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:24.331592  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:24.331654  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:24.364290  418823 cri.go:89] found id: ""
	I1210 07:52:24.364304  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.364312  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:24.364317  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:24.364375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:24.394840  418823 cri.go:89] found id: ""
	I1210 07:52:24.394855  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.394863  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:24.394871  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:24.394927  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:24.423155  418823 cri.go:89] found id: ""
	I1210 07:52:24.423169  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.423176  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:24.423181  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:24.423237  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:24.448495  418823 cri.go:89] found id: ""
	I1210 07:52:24.448509  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.448517  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:24.448522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:24.448582  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:24.473213  418823 cri.go:89] found id: ""
	I1210 07:52:24.473228  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.473244  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:24.473250  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:24.473311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:24.498332  418823 cri.go:89] found id: ""
	I1210 07:52:24.498346  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.498363  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:24.498371  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:24.498386  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:24.512582  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:24.512599  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:24.576630  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:24.568982   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.569512   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.570996   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.571433   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.572843   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:24.576640  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:24.576651  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:24.643309  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:24.643329  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:24.671954  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:24.671973  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:27.241302  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:27.251489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:27.251554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:27.276224  418823 cri.go:89] found id: ""
	I1210 07:52:27.276239  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.276247  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:27.276252  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:27.276315  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:27.302841  418823 cri.go:89] found id: ""
	I1210 07:52:27.302855  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.302862  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:27.302867  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:27.302934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:27.329134  418823 cri.go:89] found id: ""
	I1210 07:52:27.329148  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.329155  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:27.329160  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:27.329217  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:27.355218  418823 cri.go:89] found id: ""
	I1210 07:52:27.355233  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.355240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:27.355245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:27.355310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:27.380928  418823 cri.go:89] found id: ""
	I1210 07:52:27.380942  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.380948  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:27.380953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:27.381016  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:27.405139  418823 cri.go:89] found id: ""
	I1210 07:52:27.405153  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.405160  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:27.405165  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:27.405224  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:27.434261  418823 cri.go:89] found id: ""
	I1210 07:52:27.434274  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.434281  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:27.434288  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:27.434308  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:27.512344  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:27.512364  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:27.526600  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:27.526616  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:27.593338  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:27.585550   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.586220   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.587805   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.588181   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.589629   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:27.593348  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:27.593360  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:27.660306  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:27.660330  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:30.190245  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:30.200692  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:30.200762  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:30.225476  418823 cri.go:89] found id: ""
	I1210 07:52:30.225491  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.225498  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:30.225503  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:30.225561  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:30.252256  418823 cri.go:89] found id: ""
	I1210 07:52:30.252270  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.252277  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:30.252282  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:30.252339  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:30.277929  418823 cri.go:89] found id: ""
	I1210 07:52:30.277943  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.277950  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:30.277955  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:30.278013  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:30.303604  418823 cri.go:89] found id: ""
	I1210 07:52:30.303619  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.303627  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:30.303631  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:30.303695  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:30.328592  418823 cri.go:89] found id: ""
	I1210 07:52:30.328606  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.328620  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:30.328625  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:30.328683  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:30.357680  418823 cri.go:89] found id: ""
	I1210 07:52:30.357694  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.357701  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:30.357706  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:30.357772  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:30.383058  418823 cri.go:89] found id: ""
	I1210 07:52:30.383071  418823 logs.go:282] 0 containers: []
	W1210 07:52:30.383085  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:30.383093  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:30.383103  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:30.451001  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:30.451264  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:30.466690  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:30.466709  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:30.535653  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:30.527209   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.528061   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.529830   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.530197   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:30.531734   12760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:30.535662  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:30.535673  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:30.603957  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:30.603978  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:33.138030  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:33.148615  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:33.148680  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:33.174834  418823 cri.go:89] found id: ""
	I1210 07:52:33.174848  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.174855  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:33.174860  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:33.174922  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:33.205206  418823 cri.go:89] found id: ""
	I1210 07:52:33.205221  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.205228  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:33.205233  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:33.205296  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:33.235457  418823 cri.go:89] found id: ""
	I1210 07:52:33.235472  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.235480  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:33.235485  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:33.235548  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:33.260204  418823 cri.go:89] found id: ""
	I1210 07:52:33.260218  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.260225  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:33.260230  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:33.260290  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:33.285426  418823 cri.go:89] found id: ""
	I1210 07:52:33.285440  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.285448  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:33.285453  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:33.285513  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:33.310040  418823 cri.go:89] found id: ""
	I1210 07:52:33.310054  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.310068  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:33.310073  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:33.310135  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:33.334636  418823 cri.go:89] found id: ""
	I1210 07:52:33.334650  418823 logs.go:282] 0 containers: []
	W1210 07:52:33.334658  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:33.334665  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:33.334676  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:33.400914  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:33.392977   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.393712   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395357   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.395749   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:33.397284   12853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:33.400923  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:33.400934  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:33.489102  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:33.489132  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:33.523301  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:33.523319  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:33.590429  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:33.590450  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:36.107174  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:36.117293  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:36.117353  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:36.141455  418823 cri.go:89] found id: ""
	I1210 07:52:36.141469  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.141477  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:36.141482  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:36.141541  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:36.172812  418823 cri.go:89] found id: ""
	I1210 07:52:36.172826  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.172833  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:36.172838  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:36.172901  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:36.201760  418823 cri.go:89] found id: ""
	I1210 07:52:36.201774  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.201781  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:36.201786  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:36.201845  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:36.227525  418823 cri.go:89] found id: ""
	I1210 07:52:36.227539  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.227553  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:36.227558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:36.227617  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:36.255643  418823 cri.go:89] found id: ""
	I1210 07:52:36.255657  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.255664  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:36.255669  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:36.255729  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:36.281030  418823 cri.go:89] found id: ""
	I1210 07:52:36.281044  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.281052  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:36.281057  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:36.281115  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:36.307190  418823 cri.go:89] found id: ""
	I1210 07:52:36.307204  418823 logs.go:282] 0 containers: []
	W1210 07:52:36.307211  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:36.307219  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:36.307231  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:36.321687  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:36.321705  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:36.383640  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:36.374708   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.375130   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.376841   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.377402   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:36.379213   12962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:36.383650  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:36.383672  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:36.452123  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:36.452142  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:36.485724  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:36.485743  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:39.051733  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:39.062052  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:39.062152  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:39.086707  418823 cri.go:89] found id: ""
	I1210 07:52:39.086722  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.086729  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:39.086734  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:39.086793  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:39.111720  418823 cri.go:89] found id: ""
	I1210 07:52:39.111734  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.111742  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:39.111747  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:39.111807  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:39.135349  418823 cri.go:89] found id: ""
	I1210 07:52:39.135364  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.135371  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:39.135376  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:39.135435  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:39.160834  418823 cri.go:89] found id: ""
	I1210 07:52:39.160857  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.160865  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:39.160871  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:39.160938  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:39.189613  418823 cri.go:89] found id: ""
	I1210 07:52:39.189626  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.189634  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:39.189639  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:39.189696  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:39.214373  418823 cri.go:89] found id: ""
	I1210 07:52:39.214387  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.214394  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:39.214400  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:39.214457  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:39.239814  418823 cri.go:89] found id: ""
	I1210 07:52:39.239829  418823 logs.go:282] 0 containers: []
	W1210 07:52:39.239837  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:39.239845  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:39.239856  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:39.304237  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:39.304257  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:39.320565  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:39.320583  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:39.389276  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:39.381128   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.381864   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383397   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.383829   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:39.385337   13072 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:39.389286  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:39.389297  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:39.466908  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:39.466930  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:42.005528  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:42.023294  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:42.023367  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:42.058874  418823 cri.go:89] found id: ""
	I1210 07:52:42.058903  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.058911  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:42.058932  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:42.059040  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:42.089784  418823 cri.go:89] found id: ""
	I1210 07:52:42.089801  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.089809  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:42.089814  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:42.089881  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:42.121634  418823 cri.go:89] found id: ""
	I1210 07:52:42.121650  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.121658  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:42.121663  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:42.121737  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:42.153538  418823 cri.go:89] found id: ""
	I1210 07:52:42.153555  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.153563  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:42.153569  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:42.153644  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:42.183586  418823 cri.go:89] found id: ""
	I1210 07:52:42.183603  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.183611  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:42.183619  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:42.183688  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:42.213049  418823 cri.go:89] found id: ""
	I1210 07:52:42.213067  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.213078  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:42.213084  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:42.213165  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:42.242211  418823 cri.go:89] found id: ""
	I1210 07:52:42.242229  418823 logs.go:282] 0 containers: []
	W1210 07:52:42.242241  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:42.242250  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:42.242268  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:42.258546  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:42.258571  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:42.332221  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:42.323428   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.324132   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.325892   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.326478   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:42.328088   13177 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:42.332230  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:42.332241  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:42.398832  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:42.398851  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:42.439292  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:42.439308  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
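
The cycle above is minikube waiting for the control plane: it pgreps for a kube-apiserver process, asks the CRI runtime for containers matching each component name, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status diagnostics. A minimal sketch of the enumeration step, assuming crictl is on PATH; listContainerIDs is an illustrative helper, not minikube's cri.go API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the "sudo crictl ps -a --quiet --name=X" calls in
// the log: crictl prints one container ID per line, and empty output means
// no container matched.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", component, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
	}
}

Empty crictl output is exactly what produces the paired found id: "" / 0 containers: [] lines in each cycle.
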
	I1210 07:52:45.012889  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:45.052510  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:45.052580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:45.096465  418823 cri.go:89] found id: ""
	I1210 07:52:45.096488  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.096496  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:45.096501  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:45.096574  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:45.131426  418823 cri.go:89] found id: ""
	I1210 07:52:45.131442  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.131450  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:45.131456  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:45.131530  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:45.179314  418823 cri.go:89] found id: ""
	I1210 07:52:45.179331  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.179340  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:45.179345  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:45.179416  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:45.224508  418823 cri.go:89] found id: ""
	I1210 07:52:45.224525  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.224534  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:45.224540  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:45.224616  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:45.259822  418823 cri.go:89] found id: ""
	I1210 07:52:45.259850  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.259859  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:45.259870  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:45.259980  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:45.289141  418823 cri.go:89] found id: ""
	I1210 07:52:45.289157  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.289164  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:45.289170  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:45.289256  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:45.317720  418823 cri.go:89] found id: ""
	I1210 07:52:45.317749  418823 logs.go:282] 0 containers: []
	W1210 07:52:45.317764  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:45.317796  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:45.317831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:45.385230  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:45.375821   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.376581   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378081   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378688   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.380382   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:45.375821   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.376581   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378081   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.378688   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:45.380382   13276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:45.385240  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:45.385251  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:45.456646  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:45.456667  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:45.489700  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:45.489717  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:45.554187  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:45.554206  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:48.069065  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:48.079822  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:48.079950  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:48.110229  418823 cri.go:89] found id: ""
	I1210 07:52:48.110244  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.110251  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:48.110256  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:48.110317  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:48.138842  418823 cri.go:89] found id: ""
	I1210 07:52:48.138856  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.138864  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:48.138869  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:48.138928  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:48.164708  418823 cri.go:89] found id: ""
	I1210 07:52:48.164722  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.164730  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:48.164735  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:48.164793  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:48.190030  418823 cri.go:89] found id: ""
	I1210 07:52:48.190056  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.190063  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:48.190069  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:48.190160  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:48.214783  418823 cri.go:89] found id: ""
	I1210 07:52:48.214798  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.214824  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:48.214830  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:48.214899  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:48.242669  418823 cri.go:89] found id: ""
	I1210 07:52:48.242684  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.242692  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:48.242697  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:48.242758  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:48.269761  418823 cri.go:89] found id: ""
	I1210 07:52:48.269776  418823 logs.go:282] 0 containers: []
	W1210 07:52:48.269784  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:48.269791  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:48.269802  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:48.334847  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:48.334871  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:48.349781  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:48.349796  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:48.422853  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:48.408060   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.408898   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411067   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411847   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.413714   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:48.408060   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.408898   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411067   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.411847   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:48.413714   13387 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:48.422867  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:48.422877  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:48.504694  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:48.504717  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
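
Every describe-nodes attempt dies the same way: dial tcp [::1]:8441: connect: connection refused. A refused connection means nothing is listening on the apiserver port at all, which is consistent with crictl finding zero kube-apiserver containers; a running-but-unhealthy apiserver would instead time out or fail TLS/auth. A minimal probe reproducing the request kubectl logs above; the URL is copied from the log, and skipping CA verification is a sketch-only shortcut (kubectl would use the cluster CA from the kubeconfig):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // mirrors the ?timeout=32s in the logged URL
		Transport: &http.Transport{
			// Sketch-only: skip CA verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/api")
	if err != nil {
		// e.g. dial tcp [::1]:8441: connect: connection refused
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}
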
	I1210 07:52:51.036528  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.046592  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.046665  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.073731  418823 cri.go:89] found id: ""
	I1210 07:52:51.073746  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.073753  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.073759  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.073819  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.100005  418823 cri.go:89] found id: ""
	I1210 07:52:51.100019  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.100027  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.100031  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.100095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.125872  418823 cri.go:89] found id: ""
	I1210 07:52:51.125897  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.125905  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.125910  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.125970  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:51.151761  418823 cri.go:89] found id: ""
	I1210 07:52:51.151775  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.151783  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:51.151788  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:51.151846  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:51.178046  418823 cri.go:89] found id: ""
	I1210 07:52:51.178060  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.178068  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:51.178074  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:51.178143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:51.205729  418823 cri.go:89] found id: ""
	I1210 07:52:51.205743  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.205750  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:51.205756  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:51.205813  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:51.231485  418823 cri.go:89] found id: ""
	I1210 07:52:51.231498  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.231505  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:51.231512  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:51.231522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:51.295749  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:51.295769  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:51.310814  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:51.310832  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:51.374238  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:51.365806   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.366220   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.367678   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.368628   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.370186   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:51.365806   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.366220   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.367678   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.368628   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.370186   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:51.374248  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:51.374260  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:51.442190  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:51.442209  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:53.979674  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:53.989805  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:53.989873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.022480  418823 cri.go:89] found id: ""
	I1210 07:52:54.022494  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.022501  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.022507  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.022571  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.049837  418823 cri.go:89] found id: ""
	I1210 07:52:54.049851  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.049858  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.049864  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.049924  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.079149  418823 cri.go:89] found id: ""
	I1210 07:52:54.079164  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.079172  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.079177  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.079244  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.110317  418823 cri.go:89] found id: ""
	I1210 07:52:54.110332  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.110339  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.110344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.110401  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.137776  418823 cri.go:89] found id: ""
	I1210 07:52:54.137798  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.137806  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.137812  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.137873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.162601  418823 cri.go:89] found id: ""
	I1210 07:52:54.162615  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.162622  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.162629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.162690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:54.188677  418823 cri.go:89] found id: ""
	I1210 07:52:54.188691  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.188698  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:54.188706  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:54.188720  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:54.255918  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:54.255940  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:54.270493  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:54.270513  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:54.347104  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:54.330795   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.331491   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333174   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333794   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.335404   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:54.330795   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.331491   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333174   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333794   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.335404   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:54.347114  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:54.347127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:54.415651  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:54.415676  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
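
The container-status gather just above uses a shell fallback chain: resolve crictl via which (falling back to the bare name), and if that whole command fails, run docker ps -a instead. It has to go through bash -c so the backticks and || fallbacks survive; a sketch with the command string copied verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Copied from the log; bash -c preserves the `which` substitution and
	// the || fallback to docker.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("both container runtimes unavailable:", err)
	}
	fmt.Print(string(out))
}
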
	I1210 07:52:56.950504  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:56.960908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:56.960974  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:56.986942  418823 cri.go:89] found id: ""
	I1210 07:52:56.986957  418823 logs.go:282] 0 containers: []
	W1210 07:52:56.986964  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:56.986969  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:56.987046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.014060  418823 cri.go:89] found id: ""
	I1210 07:52:57.014088  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.014095  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.014100  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.014192  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.040046  418823 cri.go:89] found id: ""
	I1210 07:52:57.040061  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.040069  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.040075  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.040139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.065400  418823 cri.go:89] found id: ""
	I1210 07:52:57.065427  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.065435  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.065441  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.065511  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.094105  418823 cri.go:89] found id: ""
	I1210 07:52:57.094127  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.094135  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.094140  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.094203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.120409  418823 cri.go:89] found id: ""
	I1210 07:52:57.120425  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.120432  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.120438  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.120498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.146119  418823 cri.go:89] found id: ""
	I1210 07:52:57.146134  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.146142  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.146150  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:57.146160  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:57.160510  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:57.160526  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:57.225221  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:57.216448   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.217062   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.218858   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.219548   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.221235   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:57.216448   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.217062   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.218858   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.219548   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.221235   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:57.225232  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:57.225253  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:57.293765  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:57.293785  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.326044  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:57.326061  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:59.896294  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:59.906460  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:59.906522  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:59.930908  418823 cri.go:89] found id: ""
	I1210 07:52:59.930922  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.930930  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:59.930935  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:59.930999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:59.956028  418823 cri.go:89] found id: ""
	I1210 07:52:59.956042  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.956049  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:59.956054  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:59.956120  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:59.981032  418823 cri.go:89] found id: ""
	I1210 07:52:59.981046  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.981053  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:59.981058  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:59.981116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.027952  418823 cri.go:89] found id: ""
	I1210 07:53:00.027967  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.027975  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.027981  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.028053  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.149242  418823 cri.go:89] found id: ""
	I1210 07:53:00.149275  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.149301  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.149308  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.149381  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.205658  418823 cri.go:89] found id: ""
	I1210 07:53:00.205676  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.205684  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.205691  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.205842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.272868  418823 cri.go:89] found id: ""
	I1210 07:53:00.272884  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.272892  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.272901  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.272914  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:00.364734  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:00.349008   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.349679   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.351763   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.352235   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.354014   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:00.349008   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.349679   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.351763   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.352235   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.354014   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:00.364745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:00.364757  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:00.441561  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:00.441581  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:00.486703  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.486722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.551636  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.551658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.068015  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.078410  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.078481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.103362  418823 cri.go:89] found id: ""
	I1210 07:53:03.103378  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.103385  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.103391  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.103451  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.129650  418823 cri.go:89] found id: ""
	I1210 07:53:03.129668  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.129676  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.129681  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.129753  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.156057  418823 cri.go:89] found id: ""
	I1210 07:53:03.156072  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.156079  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.156085  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.156143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.181869  418823 cri.go:89] found id: ""
	I1210 07:53:03.181895  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.181903  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.181908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.181976  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.210043  418823 cri.go:89] found id: ""
	I1210 07:53:03.210056  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.210064  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.210069  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.210148  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.234991  418823 cri.go:89] found id: ""
	I1210 07:53:03.235006  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.235046  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.235051  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.235119  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.261578  418823 cri.go:89] found id: ""
	I1210 07:53:03.261605  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.261612  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.261620  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.261630  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:03.326335  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.326355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.340836  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.340853  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.407609  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.398750   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.399755   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401402   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401872   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.403636   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:03.398750   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.399755   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401402   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401872   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.403636   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:03.407623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:03.407637  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:03.494941  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.494964  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.031492  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.042260  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.042330  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.069383  418823 cri.go:89] found id: ""
	I1210 07:53:06.069398  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.069405  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.069410  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.069471  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.095692  418823 cri.go:89] found id: ""
	I1210 07:53:06.095706  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.095713  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.095718  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.095783  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.122565  418823 cri.go:89] found id: ""
	I1210 07:53:06.122579  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.122585  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.122590  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.122647  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.147461  418823 cri.go:89] found id: ""
	I1210 07:53:06.147476  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.147483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.147489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.147549  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.172221  418823 cri.go:89] found id: ""
	I1210 07:53:06.172235  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.172243  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.172248  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.172306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.200403  418823 cri.go:89] found id: ""
	I1210 07:53:06.200417  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.200424  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.200429  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.200487  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.224557  418823 cri.go:89] found id: ""
	I1210 07:53:06.224572  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.224578  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.224586  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.224597  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.285061  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.277547   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.278040   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279487   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279882   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.281310   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:06.277547   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.278040   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279487   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279882   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.281310   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:06.285071  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:06.285082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:06.351298  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.351317  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.379592  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.379609  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.448278  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.448298  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
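
The pgrep lines land roughly every three seconds (07:52:42, :45, :48 ... 07:53:09), i.e. a poll-until-deadline wait on the apiserver process. A sketch of such a loop; the 3 s interval is read off the timestamps and the deadline is an assumed placeholder, neither taken from minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer retries the same check as the logged
// "sudo pgrep -xnf kube-apiserver.*minikube.*" line: pgrep exits 0 when a
// matching process exists, non-zero otherwise.
func waitForAPIServer(interval, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", deadline)
}

func main() {
	if err := waitForAPIServer(3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is running")
}
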
	I1210 07:53:08.966418  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:08.976886  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:08.976953  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.010205  418823 cri.go:89] found id: ""
	I1210 07:53:09.010221  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.010248  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.010253  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.010336  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:09.039128  418823 cri.go:89] found id: ""
	I1210 07:53:09.039143  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.039150  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.039155  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.039225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.066093  418823 cri.go:89] found id: ""
	I1210 07:53:09.066108  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.066116  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.066121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.066218  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.091920  418823 cri.go:89] found id: ""
	I1210 07:53:09.091934  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.091948  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.091953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.092014  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.118286  418823 cri.go:89] found id: ""
	I1210 07:53:09.118301  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.118309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.118314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.118374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.143614  418823 cri.go:89] found id: ""
	I1210 07:53:09.143628  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.143635  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.143641  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.143705  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.168425  418823 cri.go:89] found id: ""
	I1210 07:53:09.168440  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.168447  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.168455  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:09.168465  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:09.236920  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.236943  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.269085  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.269103  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.339867  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.339886  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.354523  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.354541  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.432066  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.420805   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.421538   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.423585   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.424589   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.425304   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
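	The seven container checks in each cycle are one crictl invocation repeated with a different --name filter. A sketch of the equivalent loop (component list taken from the log; assumes crictl is on the node's PATH):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      if [ -z "$ids" ]; then
	        echo "no container found matching \"$c\""
	      else
	        echo "$c: $ids"
	      fi
	    done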
	I1210 07:53:11.933763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:11.943879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:11.943943  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:11.969555  418823 cri.go:89] found id: ""
	I1210 07:53:11.969578  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.969586  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:11.969591  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:11.969663  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:11.997107  418823 cri.go:89] found id: ""
	I1210 07:53:11.997121  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.997128  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:11.997133  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:11.997198  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.025616  418823 cri.go:89] found id: ""
	I1210 07:53:12.025630  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.025638  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.025644  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.025712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.052893  418823 cri.go:89] found id: ""
	I1210 07:53:12.052906  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.052914  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.052919  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.052983  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.077956  418823 cri.go:89] found id: ""
	I1210 07:53:12.077979  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.077988  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.077993  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.078064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.104169  418823 cri.go:89] found id: ""
	I1210 07:53:12.104183  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.104200  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.104207  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.104278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.130790  418823 cri.go:89] found id: ""
	I1210 07:53:12.130804  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.130812  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.130819  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.130831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.194759  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.194778  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.209969  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.209985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.272708  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.265266   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.265649   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267247   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267585   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.269013   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:12.272718  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:12.272730  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:12.339739  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.339759  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
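	The "describe nodes" step runs the kubectl binary that minikube stages on the node, pointed at the node-local kubeconfig; it fails here only because the apiserver is unreachable. The invocation recorded in the log can be replayed by hand (the binary path is version-specific):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig

	Once the apiserver is reachable again, the same pattern works for any subcommand, e.g. get pods -A.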
	I1210 07:53:14.870834  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:14.882996  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:14.883096  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:14.912032  418823 cri.go:89] found id: ""
	I1210 07:53:14.912046  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.912053  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:14.912059  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:14.912116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:14.937034  418823 cri.go:89] found id: ""
	I1210 07:53:14.937048  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.937056  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:14.937061  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:14.937122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:14.962165  418823 cri.go:89] found id: ""
	I1210 07:53:14.962180  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.962187  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:14.962192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:14.962256  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:14.987169  418823 cri.go:89] found id: ""
	I1210 07:53:14.987182  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.987190  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:14.987194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:14.987250  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.026690  418823 cri.go:89] found id: ""
	I1210 07:53:15.026706  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.026714  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.026719  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.026788  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.057882  418823 cri.go:89] found id: ""
	I1210 07:53:15.057896  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.057903  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.057908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.057977  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.084042  418823 cri.go:89] found id: ""
	I1210 07:53:15.084057  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.084064  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.084072  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.084082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:15.114864  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.114880  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.179901  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.179922  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.194821  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.194838  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.259725  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.250726   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.251670   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.253338   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.254117   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.255652   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:15.259735  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:15.259747  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:17.826809  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:17.837193  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:17.837254  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:17.863390  418823 cri.go:89] found id: ""
	I1210 07:53:17.863404  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.863411  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:17.863416  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:17.863481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:17.893221  418823 cri.go:89] found id: ""
	I1210 07:53:17.893236  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.893243  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:17.893248  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:17.893306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:17.921130  418823 cri.go:89] found id: ""
	I1210 07:53:17.921155  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.921163  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:17.921168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:17.921236  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:17.945888  418823 cri.go:89] found id: ""
	I1210 07:53:17.945901  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.945909  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:17.945914  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:17.945972  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:17.970988  418823 cri.go:89] found id: ""
	I1210 07:53:17.971002  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.971022  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:17.971027  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:17.971097  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:17.996399  418823 cri.go:89] found id: ""
	I1210 07:53:17.996413  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.996420  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:17.996425  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:17.996494  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.023886  418823 cri.go:89] found id: ""
	I1210 07:53:18.023900  418823 logs.go:282] 0 containers: []
	W1210 07:53:18.023908  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.023931  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.023947  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.090117  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.090136  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.105261  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.105280  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.174300  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.166057   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.166727   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168471   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168892   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.170326   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:18.174310  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:18.174322  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:18.241759  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.241779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:20.779144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:20.788940  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:20.788999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:20.814543  418823 cri.go:89] found id: ""
	I1210 07:53:20.814557  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.814564  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:20.814569  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:20.814634  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:20.839723  418823 cri.go:89] found id: ""
	I1210 07:53:20.839737  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.839744  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:20.839749  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:20.839808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:20.869222  418823 cri.go:89] found id: ""
	I1210 07:53:20.869237  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.869244  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:20.869249  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:20.869310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:20.893562  418823 cri.go:89] found id: ""
	I1210 07:53:20.893576  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.893593  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:20.893598  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:20.893664  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:20.919439  418823 cri.go:89] found id: ""
	I1210 07:53:20.919454  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.919461  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:20.919466  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:20.919526  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:20.947602  418823 cri.go:89] found id: ""
	I1210 07:53:20.947617  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.947624  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:20.947629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:20.947688  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:20.976621  418823 cri.go:89] found id: ""
	I1210 07:53:20.976635  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.976642  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:20.976650  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:20.976666  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.040860  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.040884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.055749  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.055767  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.122414  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.114763   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.115197   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116691   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116998   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.118463   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:21.122458  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:21.122468  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:21.188312  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.188333  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:23.717609  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:23.730817  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:23.730882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:23.756488  418823 cri.go:89] found id: ""
	I1210 07:53:23.756504  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.756512  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:23.756518  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:23.756584  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:23.782540  418823 cri.go:89] found id: ""
	I1210 07:53:23.782555  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.782562  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:23.782567  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:23.782626  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:23.807181  418823 cri.go:89] found id: ""
	I1210 07:53:23.807195  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.807204  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:23.807209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:23.807273  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:23.831876  418823 cri.go:89] found id: ""
	I1210 07:53:23.831891  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.831900  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:23.831905  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:23.831964  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:23.858557  418823 cri.go:89] found id: ""
	I1210 07:53:23.858572  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.858580  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:23.858585  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:23.858646  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:23.883797  418823 cri.go:89] found id: ""
	I1210 07:53:23.883811  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.883820  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:23.883825  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:23.883922  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:23.913668  418823 cri.go:89] found id: ""
	I1210 07:53:23.913682  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.913690  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:23.913698  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:23.913709  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:23.977126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:23.968779   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.969424   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971050   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971601   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.973232   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:23.977136  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:23.977147  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:24.045089  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.045110  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.076143  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.076161  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.142779  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.142798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:26.658408  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:26.669312  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:26.669374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:26.697592  418823 cri.go:89] found id: ""
	I1210 07:53:26.697607  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.697615  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:26.697621  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:26.697687  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:26.725323  418823 cri.go:89] found id: ""
	I1210 07:53:26.725363  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.725370  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:26.725375  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:26.725433  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:26.754039  418823 cri.go:89] found id: ""
	I1210 07:53:26.754053  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.754060  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:26.754066  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:26.754122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:26.788322  418823 cri.go:89] found id: ""
	I1210 07:53:26.788337  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.788344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:26.788349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:26.788408  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:26.818143  418823 cri.go:89] found id: ""
	I1210 07:53:26.818157  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.818180  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:26.818185  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:26.818246  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:26.845686  418823 cri.go:89] found id: ""
	I1210 07:53:26.845699  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.845707  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:26.845714  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:26.845772  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:26.871522  418823 cri.go:89] found id: ""
	I1210 07:53:26.871536  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.871544  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:26.871552  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:26.871568  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:26.902527  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:26.902544  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:26.967583  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:26.967603  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:26.982258  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:26.982275  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.053700  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.045382   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.046023   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.047684   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.048159   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.049667   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:27.053710  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:27.053722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:29.623259  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:29.633196  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:29.633265  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:29.658246  418823 cri.go:89] found id: ""
	I1210 07:53:29.658271  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.658278  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:29.658283  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:29.658358  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:29.685747  418823 cri.go:89] found id: ""
	I1210 07:53:29.685762  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.685769  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:29.685775  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:29.685842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:29.721266  418823 cri.go:89] found id: ""
	I1210 07:53:29.721280  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.721288  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:29.721292  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:29.721350  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:29.746632  418823 cri.go:89] found id: ""
	I1210 07:53:29.746647  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.746655  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:29.746660  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:29.746718  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:29.771709  418823 cri.go:89] found id: ""
	I1210 07:53:29.771725  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.771732  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:29.771737  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:29.771800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:29.801580  418823 cri.go:89] found id: ""
	I1210 07:53:29.801595  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.801602  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:29.801608  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:29.801673  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:29.827750  418823 cri.go:89] found id: ""
	I1210 07:53:29.827764  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.827771  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:29.827780  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:29.827795  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:29.893437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:29.885346   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.885722   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887338   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887997   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.889654   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:29.893447  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:29.893458  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:29.960399  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:29.960419  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:29.991781  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:29.991799  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.072819  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.072841  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
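	The runner repeats this whole cycle roughly every three seconds. A bounded version of the same wait (the pgrep pattern is copied from the log; the 120-second budget is illustrative):

	    deadline=$((SECONDS + 120))   # illustrative two-minute budget
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        exit 1
	      fi
	      sleep 3
	    done
	    echo "kube-apiserver is running"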
	I1210 07:53:32.588396  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:32.598821  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:32.598882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:32.628590  418823 cri.go:89] found id: ""
	I1210 07:53:32.628604  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.628611  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:32.628616  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:32.628678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:32.658338  418823 cri.go:89] found id: ""
	I1210 07:53:32.658352  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.658359  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:32.658364  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:32.658424  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:32.701705  418823 cri.go:89] found id: ""
	I1210 07:53:32.701719  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.701727  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:32.701732  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:32.701792  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:32.735461  418823 cri.go:89] found id: ""
	I1210 07:53:32.735476  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.735483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:32.735488  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:32.735548  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:32.761096  418823 cri.go:89] found id: ""
	I1210 07:53:32.761109  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.761116  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:32.761121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:32.761180  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:32.787468  418823 cri.go:89] found id: ""
	I1210 07:53:32.787481  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.787488  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:32.787493  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:32.787553  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:32.813085  418823 cri.go:89] found id: ""
	I1210 07:53:32.813098  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.813105  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:32.813113  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:32.813123  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:32.881504  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:32.873323   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.874057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.875714   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.876057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.877280   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:32.873323   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.874057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.875714   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.876057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.877280   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:32.881541  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:32.881552  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:32.951245  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:32.951265  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:32.980096  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:32.980113  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:33.046381  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.046400  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
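Each gathering pass above boils down to the same four host commands. To collect an identical bundle by hand from a shell on the node, run them verbatim as they appear in the log:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a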
	I1210 07:53:35.561454  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.571515  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.571579  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.596461  418823 cri.go:89] found id: ""
	I1210 07:53:35.596476  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.596483  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.596488  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.596547  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:35.623764  418823 cri.go:89] found id: ""
	I1210 07:53:35.623780  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.623787  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:35.623792  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:35.623852  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:35.649136  418823 cri.go:89] found id: ""
	I1210 07:53:35.649150  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.649159  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:35.649164  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:35.649267  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:35.689785  418823 cri.go:89] found id: ""
	I1210 07:53:35.689799  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.689806  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:35.689820  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:35.689883  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:35.717073  418823 cri.go:89] found id: ""
	I1210 07:53:35.717086  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.717104  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:35.717109  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:35.717167  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:35.747852  418823 cri.go:89] found id: ""
	I1210 07:53:35.747866  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.747874  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:35.747879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:35.747936  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:35.772479  418823 cri.go:89] found id: ""
	I1210 07:53:35.772493  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.772500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:35.772508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:35.772519  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:35.843052  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:35.843075  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:35.857842  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:35.857859  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:35.927434  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:35.918703   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.919740   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921421   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921929   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.923509   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:35.918703   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.919740   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921421   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921929   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.923509   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:35.927445  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:35.927457  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:35.996278  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:35.996299  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:38.532848  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.543645  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.543706  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.573367  418823 cri.go:89] found id: ""
	I1210 07:53:38.573382  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.573389  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.573394  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.573456  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.603108  418823 cri.go:89] found id: ""
	I1210 07:53:38.603122  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.603129  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.603134  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.603193  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.629381  418823 cri.go:89] found id: ""
	I1210 07:53:38.629395  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.629402  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.629407  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.629467  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.662313  418823 cri.go:89] found id: ""
	I1210 07:53:38.662327  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.662334  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.662339  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.662402  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:38.704257  418823 cri.go:89] found id: ""
	I1210 07:53:38.704271  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.704279  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:38.704284  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:38.704346  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:38.734287  418823 cri.go:89] found id: ""
	I1210 07:53:38.734302  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.734309  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:38.734315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:38.734375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:38.760452  418823 cri.go:89] found id: ""
	I1210 07:53:38.760467  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.760474  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:38.760483  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:38.760493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:38.827227  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:38.827248  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:38.841994  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:38.842011  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:38.909535  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:38.899398   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.900949   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.901647   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.903592   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.904308   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:38.899398   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.900949   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.901647   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.903592   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.904308   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:38.909548  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:38.909559  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:38.977890  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:38.977912  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
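The passes repeat on a roughly three-second cadence: minikube first looks for a running apiserver process, then for an apiserver container, and only gathers logs when both come up empty. The wait itself lives in minikube's Go code; a rough shell equivalent of the two checks, as a sketch, would be:

    # Loop until either check finds an apiserver (both commands are taken from the log above)
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null \
          || [ -n "$(sudo crictl ps -a --quiet --name=kube-apiserver)" ]; do
      sleep 3
    done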
	I1210 07:53:41.514495  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.524880  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.524939  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.550178  418823 cri.go:89] found id: ""
	I1210 07:53:41.550208  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.550216  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.550220  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.550289  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.578068  418823 cri.go:89] found id: ""
	I1210 07:53:41.578090  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.578097  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.578102  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.578175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.603754  418823 cri.go:89] found id: ""
	I1210 07:53:41.603768  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.603776  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.603782  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.603840  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.628986  418823 cri.go:89] found id: ""
	I1210 07:53:41.629000  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.629008  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.629013  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.629072  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.654287  418823 cri.go:89] found id: ""
	I1210 07:53:41.654302  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.654309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.654314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.654384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.688416  418823 cri.go:89] found id: ""
	I1210 07:53:41.688430  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.688437  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.688442  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.688498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.713499  418823 cri.go:89] found id: ""
	I1210 07:53:41.713513  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.713521  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.713528  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:41.713538  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:41.730410  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:41.730426  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:41.799336  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:41.790498   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.791386   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793149   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793773   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.794989   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:41.790498   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.791386   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793149   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793773   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.794989   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:41.799346  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:41.799357  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:41.867347  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:41.867369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:41.895652  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:41.895669  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
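The container-status command above is worth unpacking: it is a fallback chain, not a single tool. `which crictl` substitutes the full path when crictl is on PATH (the backticks are the older command-substitution syntax, equivalent to $(...)); if the lookup fails, the bare name is tried anyway; and if the whole crictl invocation fails, docker is the last resort:

    # 1. Resolve crictl from PATH, or fall back to the bare name
    # 2. List all containers; if crictl itself fails, ask docker instead
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a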
	I1210 07:53:44.462932  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.472795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.472854  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.504932  418823 cri.go:89] found id: ""
	I1210 07:53:44.504947  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.504960  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.504965  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.505025  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.535103  418823 cri.go:89] found id: ""
	I1210 07:53:44.535125  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.535133  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.535138  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.535204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.560225  418823 cri.go:89] found id: ""
	I1210 07:53:44.560239  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.560247  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.560252  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.560310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.585575  418823 cri.go:89] found id: ""
	I1210 07:53:44.585597  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.585604  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.585609  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.585668  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.611737  418823 cri.go:89] found id: ""
	I1210 07:53:44.611751  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.611758  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.611763  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.611824  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.636495  418823 cri.go:89] found id: ""
	I1210 07:53:44.636510  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.636517  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.636522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.636580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.665441  418823 cri.go:89] found id: ""
	I1210 07:53:44.665455  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.665463  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.665471  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:44.665481  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:44.702032  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:44.702048  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:44.776362  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:44.776383  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:44.792240  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.792256  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:44.854270  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:44.846404   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.846945   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848504   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848953   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.850457   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:44.846404   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.846945   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848504   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848953   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.850457   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:44.854279  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:44.854291  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:47.423978  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.436858  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.436919  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.461997  418823 cri.go:89] found id: ""
	I1210 07:53:47.462011  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.462018  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.462023  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.462125  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.487419  418823 cri.go:89] found id: ""
	I1210 07:53:47.487434  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.487441  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.487446  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.487504  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.512823  418823 cri.go:89] found id: ""
	I1210 07:53:47.512837  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.512845  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.512850  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.512913  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.538819  418823 cri.go:89] found id: ""
	I1210 07:53:47.538833  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.538840  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.538845  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.538903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.563454  418823 cri.go:89] found id: ""
	I1210 07:53:47.563468  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.563476  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.563481  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.563544  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.588347  418823 cri.go:89] found id: ""
	I1210 07:53:47.588361  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.588368  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.588374  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.588435  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.613835  418823 cri.go:89] found id: ""
	I1210 07:53:47.613848  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.613855  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.613863  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:47.613874  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:47.679468  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:47.679488  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:47.695124  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:47.695148  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:47.764330  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:47.755921   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.756582   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758176   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758693   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.760262   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:47.755921   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.756582   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758176   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758693   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.760262   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:47.764340  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:47.764350  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:47.834926  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:47.834946  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:50.366762  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.376894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.376958  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.402825  418823 cri.go:89] found id: ""
	I1210 07:53:50.402839  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.402846  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.402851  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.402912  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.431663  418823 cri.go:89] found id: ""
	I1210 07:53:50.431677  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.431685  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.431690  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.431748  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.458799  418823 cri.go:89] found id: ""
	I1210 07:53:50.458813  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.458821  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.458826  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.458885  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.483609  418823 cri.go:89] found id: ""
	I1210 07:53:50.483623  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.483630  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.483635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.483693  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.509720  418823 cri.go:89] found id: ""
	I1210 07:53:50.509735  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.509743  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.509748  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.509808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.535475  418823 cri.go:89] found id: ""
	I1210 07:53:50.535489  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.535496  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.535501  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.535560  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.559559  418823 cri.go:89] found id: ""
	I1210 07:53:50.559572  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.559580  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.559587  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:50.559598  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:50.624409  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:50.624430  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:50.639099  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:50.639117  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:50.734659  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:50.717698   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.718879   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.720716   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.721025   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.727758   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:50.717698   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.718879   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.720716   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.721025   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.727758   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:50.734673  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:50.734686  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:50.801764  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.801789  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.334554  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.344704  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.344767  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.369027  418823 cri.go:89] found id: ""
	I1210 07:53:53.369041  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.369049  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.369054  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.369112  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.392884  418823 cri.go:89] found id: ""
	I1210 07:53:53.392897  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.392904  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.392909  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.392967  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.421604  418823 cri.go:89] found id: ""
	I1210 07:53:53.421618  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.421625  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.421630  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.421690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.446954  418823 cri.go:89] found id: ""
	I1210 07:53:53.446968  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.446976  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.446982  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.447078  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.472681  418823 cri.go:89] found id: ""
	I1210 07:53:53.472696  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.472703  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.472708  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.472769  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.497847  418823 cri.go:89] found id: ""
	I1210 07:53:53.497861  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.497868  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.497873  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.497934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.524109  418823 cri.go:89] found id: ""
	I1210 07:53:53.524123  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.524131  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.524138  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.524149  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:53.593506  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:53.593527  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:53.607933  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:53.607950  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:53.678735  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:53.669132   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671186   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671527   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673119   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673744   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:53.669132   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671186   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671527   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673119   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673744   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:53.678745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:53.678755  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:53.752843  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.752865  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
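The "describe nodes" step fails for the same underlying reason: it runs the bundled kubectl against the node-local kubeconfig, which points at localhost:8441. Reproduced by hand (the command is verbatim from the log, line-wrapped for readability), it exits 1 while the apiserver is down and will print per-node details once it is back:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig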
	I1210 07:53:56.287368  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.297545  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.297605  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.327438  418823 cri.go:89] found id: ""
	I1210 07:53:56.327452  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.327459  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.327465  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.327525  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.357601  418823 cri.go:89] found id: ""
	I1210 07:53:56.357616  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.357623  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.357627  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.357686  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.382796  418823 cri.go:89] found id: ""
	I1210 07:53:56.382810  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.382817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.382822  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.382878  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.410018  418823 cri.go:89] found id: ""
	I1210 07:53:56.410032  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.410039  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.410050  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.410110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.437449  418823 cri.go:89] found id: ""
	I1210 07:53:56.437472  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.437480  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.437485  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.437551  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.462063  418823 cri.go:89] found id: ""
	I1210 07:53:56.462077  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.462096  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.462102  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.462178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.489728  418823 cri.go:89] found id: ""
	I1210 07:53:56.489743  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.489750  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.489757  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.489771  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.504129  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.504145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.569498  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.561896   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.562381   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.563902   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.564206   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.565675   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:56.569507  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:56.569518  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:56.638285  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.638304  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.676473  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.676490  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.250249  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.260346  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.260407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.288615  418823 cri.go:89] found id: ""
	I1210 07:53:59.288633  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.288640  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.288645  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.288707  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.314559  418823 cri.go:89] found id: ""
	I1210 07:53:59.314574  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.314581  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.314586  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.314652  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.339212  418823 cri.go:89] found id: ""
	I1210 07:53:59.339227  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.339235  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.339240  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.339296  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.365478  418823 cri.go:89] found id: ""
	I1210 07:53:59.365493  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.365500  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.365505  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.365565  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.391116  418823 cri.go:89] found id: ""
	I1210 07:53:59.391131  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.391138  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.391143  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.391204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.417133  418823 cri.go:89] found id: ""
	I1210 07:53:59.417153  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.417161  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.417166  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.417225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.442940  418823 cri.go:89] found id: ""
	I1210 07:53:59.442954  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.442961  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.442968  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:59.442979  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:59.509257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.509277  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.541319  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.541335  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.607451  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.607470  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.621934  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.621951  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.693437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.684845   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.685597   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687307   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687819   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.689392   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:02.193693  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.204795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.204860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.230168  418823 cri.go:89] found id: ""
	I1210 07:54:02.230185  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.230192  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.230198  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.230311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.263333  418823 cri.go:89] found id: ""
	I1210 07:54:02.263349  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.263356  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.263361  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.263426  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.290361  418823 cri.go:89] found id: ""
	I1210 07:54:02.290376  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.290384  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.290388  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.290448  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.316861  418823 cri.go:89] found id: ""
	I1210 07:54:02.316875  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.316882  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.316894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.316951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.343227  418823 cri.go:89] found id: ""
	I1210 07:54:02.343242  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.343250  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.343255  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.343319  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.370541  418823 cri.go:89] found id: ""
	I1210 07:54:02.370555  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.370562  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.370567  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.370655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.397479  418823 cri.go:89] found id: ""
	I1210 07:54:02.397493  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.397500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.397508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.397522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.463725  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.463746  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.478295  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.478312  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.550548  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.542348   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.542913   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544446   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544888   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.546428   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:02.550558  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:02.550569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:02.620103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.620125  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.149959  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.160417  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.160482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.189797  418823 cri.go:89] found id: ""
	I1210 07:54:05.189812  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.189826  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.189831  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.189890  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.217788  418823 cri.go:89] found id: ""
	I1210 07:54:05.217815  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.217823  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.217828  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.217893  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.243664  418823 cri.go:89] found id: ""
	I1210 07:54:05.243678  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.243686  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.243690  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.243749  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.269052  418823 cri.go:89] found id: ""
	I1210 07:54:05.269067  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.269075  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.269080  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.269140  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.294538  418823 cri.go:89] found id: ""
	I1210 07:54:05.294552  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.294559  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.294564  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.294627  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.321865  418823 cri.go:89] found id: ""
	I1210 07:54:05.321880  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.321887  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.321893  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.321954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.348181  418823 cri.go:89] found id: ""
	I1210 07:54:05.348195  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.348203  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.348210  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.348225  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.379036  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.379062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.443960  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.443981  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.458603  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.458620  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.526883  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.519093   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.519877   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.521585   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.522069   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.523123   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:05.526895  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:05.526910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:08.095997  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.105932  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.105991  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.130974  418823 cri.go:89] found id: ""
	I1210 07:54:08.130988  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.130996  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.131001  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.131153  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.155374  418823 cri.go:89] found id: ""
	I1210 07:54:08.155388  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.155396  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.155401  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.155458  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.180878  418823 cri.go:89] found id: ""
	I1210 07:54:08.180892  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.180899  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.180904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.180962  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.209651  418823 cri.go:89] found id: ""
	I1210 07:54:08.209664  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.209672  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.209676  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.209735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.235331  418823 cri.go:89] found id: ""
	I1210 07:54:08.235344  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.235358  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.235362  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.235421  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.260980  418823 cri.go:89] found id: ""
	I1210 07:54:08.260995  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.261003  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.261008  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.261066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.286809  418823 cri.go:89] found id: ""
	I1210 07:54:08.286824  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.286831  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.286838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.286848  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.353470  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.353491  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.367911  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.367928  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.434091  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.426418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.426959   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428869   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.430318   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:08.434101  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:08.434120  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:08.502201  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.502221  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:11.031209  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.041439  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.041500  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.067253  418823 cri.go:89] found id: ""
	I1210 07:54:11.067268  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.067275  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.067280  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.067339  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.092951  418823 cri.go:89] found id: ""
	I1210 07:54:11.092965  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.092972  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.092978  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.093038  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.118430  418823 cri.go:89] found id: ""
	I1210 07:54:11.118445  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.118453  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.118458  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.118520  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.144820  418823 cri.go:89] found id: ""
	I1210 07:54:11.144835  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.144843  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.144848  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.144914  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.173374  418823 cri.go:89] found id: ""
	I1210 07:54:11.173388  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.173396  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.173401  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.173459  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.198352  418823 cri.go:89] found id: ""
	I1210 07:54:11.198367  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.198375  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.198380  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.198450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.224536  418823 cri.go:89] found id: ""
	I1210 07:54:11.224550  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.224559  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.224569  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.224579  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.290262  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.290283  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:11.304639  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.304658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.368924  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.359948   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.360716   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362347   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362996   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.364714   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:11.368934  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:11.368944  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:11.435589  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.435610  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:13.966356  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:13.976957  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:13.977022  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.004519  418823 cri.go:89] found id: ""
	I1210 07:54:14.004536  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.004546  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.004551  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.004633  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.033357  418823 cri.go:89] found id: ""
	I1210 07:54:14.033372  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.033380  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.033385  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.033445  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.059488  418823 cri.go:89] found id: ""
	I1210 07:54:14.059510  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.059517  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.059522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.059585  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.087964  418823 cri.go:89] found id: ""
	I1210 07:54:14.087987  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.087996  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.088002  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.088073  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.114469  418823 cri.go:89] found id: ""
	I1210 07:54:14.114483  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.114501  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.114507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.114580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.144394  418823 cri.go:89] found id: ""
	I1210 07:54:14.144408  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.144415  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.144420  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.144482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.173724  418823 cri.go:89] found id: ""
	I1210 07:54:14.173746  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.173754  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.173762  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.173779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.247855  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.239156   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.239953   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.241733   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.242311   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.243958   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:14.247865  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:14.247879  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:14.317778  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.317798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:14.346568  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.346586  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:14.412678  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.412697  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:16.927406  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:16.938842  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:16.938903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:16.972184  418823 cri.go:89] found id: ""
	I1210 07:54:16.972197  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.972204  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:16.972209  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:16.972268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:16.999114  418823 cri.go:89] found id: ""
	I1210 07:54:16.999129  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.999136  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:16.999141  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:16.999204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.026900  418823 cri.go:89] found id: ""
	I1210 07:54:17.026913  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.026921  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.026926  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.026985  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.053121  418823 cri.go:89] found id: ""
	I1210 07:54:17.053135  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.053143  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.053148  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.053208  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.079184  418823 cri.go:89] found id: ""
	I1210 07:54:17.079198  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.079204  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.079209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.079268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.104597  418823 cri.go:89] found id: ""
	I1210 07:54:17.104611  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.104619  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.104624  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.104681  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.133412  418823 cri.go:89] found id: ""
	I1210 07:54:17.133426  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.133434  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.133441  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.133452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.147432  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.147452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.210612  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.202657   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.203522   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205130   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205458   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.206975   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:17.210623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:17.210634  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:17.279473  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.279493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:17.307828  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.307852  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:19.881299  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:19.891315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:19.891375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:19.926287  418823 cri.go:89] found id: ""
	I1210 07:54:19.926302  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.926309  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:19.926314  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:19.926373  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:19.961020  418823 cri.go:89] found id: ""
	I1210 07:54:19.961036  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.961043  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:19.961048  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:19.961111  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:19.994369  418823 cri.go:89] found id: ""
	I1210 07:54:19.994383  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.994390  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:19.994395  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:19.994455  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.028896  418823 cri.go:89] found id: ""
	I1210 07:54:20.028911  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.028919  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.028924  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.028989  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.059934  418823 cri.go:89] found id: ""
	I1210 07:54:20.059955  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.059963  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.060015  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.060093  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.086606  418823 cri.go:89] found id: ""
	I1210 07:54:20.086622  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.086629  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.086635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.086703  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.112469  418823 cri.go:89] found id: ""
	I1210 07:54:20.112486  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.112496  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.112504  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.112515  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.176933  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.176953  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.193125  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.193142  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.257603  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.249385   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.249848   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251552   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251918   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.253488   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:20.257614  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:20.257625  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:20.324617  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.324638  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
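
The block above is one iteration of minikube's apiserver wait loop: pgrep looks for a running kube-apiserver process, crictl is queried for each expected control-plane container (all come back empty, hence the "0 containers" lines), and the standard log bundle (kubelet, dmesg, describe nodes, CRI-O, container status) is gathered before the next retry a few seconds later. The commands below are taken from the log itself; note the container-status step prefers crictl but falls back to docker:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                    # any apiserver process at all?
    sudo crictl ps -a --quiet --name=kube-apiserver                 # any apiserver container, even exited?
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status, with docker fallback
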
	I1210 07:54:22.853766  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:22.864101  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:22.864164  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:22.888959  418823 cri.go:89] found id: ""
	I1210 07:54:22.888974  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.888981  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:22.888986  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:22.889046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:22.921447  418823 cri.go:89] found id: ""
	I1210 07:54:22.921460  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.921468  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:22.921473  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:22.921543  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:22.955505  418823 cri.go:89] found id: ""
	I1210 07:54:22.955519  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.955526  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:22.955531  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:22.955594  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:22.986982  418823 cri.go:89] found id: ""
	I1210 07:54:22.986996  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.987004  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:22.987031  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:22.987094  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.016264  418823 cri.go:89] found id: ""
	I1210 07:54:23.016279  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.016286  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.016291  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.016354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.046460  418823 cri.go:89] found id: ""
	I1210 07:54:23.046474  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.046482  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.046507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.046577  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.074337  418823 cri.go:89] found id: ""
	I1210 07:54:23.074352  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.074361  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.074369  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.074384  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.139358  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.139380  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.154211  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.154233  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.215488  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.206516   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.207119   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.208073   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.209680   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.210273   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:23.215499  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:23.215512  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:23.282950  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.282971  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:25.812054  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.822192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.822255  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.847807  418823 cri.go:89] found id: ""
	I1210 07:54:25.847822  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.847831  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.847836  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.847900  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:25.876611  418823 cri.go:89] found id: ""
	I1210 07:54:25.876626  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.876634  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:25.876638  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:25.876698  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:25.902947  418823 cri.go:89] found id: ""
	I1210 07:54:25.902961  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.902968  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:25.902973  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:25.903056  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:25.944041  418823 cri.go:89] found id: ""
	I1210 07:54:25.944055  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.944062  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:25.944068  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:25.944128  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:25.970835  418823 cri.go:89] found id: ""
	I1210 07:54:25.970849  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.970857  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:25.970862  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:25.970923  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.003198  418823 cri.go:89] found id: ""
	I1210 07:54:26.003214  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.003222  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.003228  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.003300  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.032526  418823 cri.go:89] found id: ""
	I1210 07:54:26.032540  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.032548  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.032556  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.032569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.099635  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.099655  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.114354  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.114373  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.179258  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.170492   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.171349   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173076   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173401   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.174907   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:26.179269  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:26.179281  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:26.248336  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.248355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:28.782480  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.792391  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.792450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.817311  418823 cri.go:89] found id: ""
	I1210 07:54:28.817325  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.817332  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.817338  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.817393  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.841584  418823 cri.go:89] found id: ""
	I1210 07:54:28.841597  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.841605  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.841609  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.841666  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.867004  418823 cri.go:89] found id: ""
	I1210 07:54:28.867040  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.867048  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.867052  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.867110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:28.891591  418823 cri.go:89] found id: ""
	I1210 07:54:28.891604  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.891615  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:28.891621  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:28.891677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:28.927624  418823 cri.go:89] found id: ""
	I1210 07:54:28.927637  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.927645  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:28.927650  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:28.927714  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:28.955409  418823 cri.go:89] found id: ""
	I1210 07:54:28.955423  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.955430  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:28.955435  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:28.955493  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:28.980779  418823 cri.go:89] found id: ""
	I1210 07:54:28.980794  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.980801  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:28.980808  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:28.980819  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:28.995862  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:28.995878  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.065674  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.057420   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.058224   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.059795   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.060097   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.061321   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:29.065683  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:29.065695  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:29.133594  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.133615  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.165522  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.165539  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:31.733707  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.743741  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.743803  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.768618  418823 cri.go:89] found id: ""
	I1210 07:54:31.768633  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.768647  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.768652  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.768712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.797641  418823 cri.go:89] found id: ""
	I1210 07:54:31.797656  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.797663  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.797668  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.797729  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.823152  418823 cri.go:89] found id: ""
	I1210 07:54:31.823166  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.823174  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.823178  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.823241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.849644  418823 cri.go:89] found id: ""
	I1210 07:54:31.849659  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.849666  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.849671  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.849735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.877522  418823 cri.go:89] found id: ""
	I1210 07:54:31.877545  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.877553  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.877558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.877625  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.903129  418823 cri.go:89] found id: ""
	I1210 07:54:31.903142  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.903150  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.903155  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.903212  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:31.941362  418823 cri.go:89] found id: ""
	I1210 07:54:31.941376  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.941383  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:31.941391  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:31.941402  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.025544  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.025566  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.040949  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.040969  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.110721  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.101989   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.102773   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104458   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104789   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.106701   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:32.110732  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:32.110743  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:32.178647  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.178670  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:34.707070  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.717245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.717310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.745693  418823 cri.go:89] found id: ""
	I1210 07:54:34.745707  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.745714  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.745726  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.745790  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.771395  418823 cri.go:89] found id: ""
	I1210 07:54:34.771409  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.771416  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.771421  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.771479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.797775  418823 cri.go:89] found id: ""
	I1210 07:54:34.797788  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.797796  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.797801  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.797861  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.825083  418823 cri.go:89] found id: ""
	I1210 07:54:34.825100  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.825107  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.825112  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.825177  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.850864  418823 cri.go:89] found id: ""
	I1210 07:54:34.850879  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.850896  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.850901  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.850975  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.875132  418823 cri.go:89] found id: ""
	I1210 07:54:34.875146  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.875154  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.875159  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.875227  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.899938  418823 cri.go:89] found id: ""
	I1210 07:54:34.899953  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.899970  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.899979  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:34.899990  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:34.923898  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:34.923916  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.004342  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:34.994993   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.995801   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997355   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997871   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.999627   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:35.004372  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:35.004385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:35.076257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.076279  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:35.104842  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:35.104858  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:37.672039  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.681946  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.682009  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.706328  418823 cri.go:89] found id: ""
	I1210 07:54:37.706342  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.706349  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.706354  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.706420  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.731157  418823 cri.go:89] found id: ""
	I1210 07:54:37.731171  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.731179  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.731183  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.731243  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.756672  418823 cri.go:89] found id: ""
	I1210 07:54:37.756686  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.756693  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.756698  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.756758  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.782323  418823 cri.go:89] found id: ""
	I1210 07:54:37.782337  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.782344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.782349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.782407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.809398  418823 cri.go:89] found id: ""
	I1210 07:54:37.809411  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.809425  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.809430  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.809488  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.834279  418823 cri.go:89] found id: ""
	I1210 07:54:37.834300  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.834307  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.834311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.834378  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.860329  418823 cri.go:89] found id: ""
	I1210 07:54:37.860343  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.860351  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.860359  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:37.860369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:37.933541  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:37.923454   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.924390   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926115   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926751   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.928659   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:54:37.933553  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:37.933564  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:38.012971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.012996  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:38.049266  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:38.049284  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.124985  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.125006  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:40.640115  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.651783  418823 kubeadm.go:602] duration metric: took 4m3.269334188s to restartPrimaryControlPlane
	W1210 07:54:40.651842  418823 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 07:54:40.651915  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
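
At 07:54:40 the wait loop gives up: after 4m3s the apiserver never appeared, so minikube abandons the control-plane restart and falls back to a full reset plus re-init. The reset command above prepends the version-pinned binaries to PATH so the matching kubeadm runs, targets the CRI-O socket, and passes --force to skip the confirmation prompt; a runnable equivalent:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
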
	I1210 07:54:41.061132  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:54:41.073851  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:54:41.081733  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:54:41.081788  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:54:41.089443  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:54:41.089453  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:54:41.089505  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:54:41.097510  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:54:41.097570  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:54:41.105078  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:54:41.112622  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:54:41.112682  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:54:41.120112  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.127831  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:54:41.127887  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.135843  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:54:41.143605  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:54:41.143662  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
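
The sequence above is the stale-kubeconfig sweep that runs before re-init. Because none of the four files under /etc/kubernetes survived the reset, the initial ls check fails and the structured cleanup is skipped; minikube then greps each file for the expected endpoint and removes any file that lacks it, a no-op here since every grep exits with status 2 on a missing file. As a sketch of the per-file check:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # endpoint missing or file absent -> remove
    done
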
	I1210 07:54:41.150893  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
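
The --ignore-preflight-errors list names every kubeadm check expected to misfire inside a Docker-driver node (see the "ignoring SystemVerification ... because of docker driver" line above): the DirAvailable/FileAvailable checks (manifests and data from the previous cluster are reused on purpose), Port-10250, Swap, NumCPU, Mem, SystemVerification, and the bridge-netfilter file content check. That last one reads the path encoded in its name, which may not even exist inside the container:

    cat /proc/sys/net/bridge/bridge-nf-call-iptables   # kubeadm wants "1" here when the check is enforced
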
	I1210 07:54:41.188283  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:54:41.188576  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:54:41.266308  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:54:41.266369  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:54:41.266407  418823 kubeadm.go:319] OS: Linux
	I1210 07:54:41.266448  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:54:41.266493  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:54:41.266536  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:54:41.266581  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:54:41.266627  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:54:41.266672  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:54:41.266714  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:54:41.266758  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:54:41.266801  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:54:41.327793  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:54:41.327890  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:54:41.327975  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:54:41.335492  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:54:41.340870  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:54:41.340961  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:54:41.341031  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:54:41.341119  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:54:41.341186  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:54:41.341262  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:54:41.341320  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:54:41.341398  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:54:41.341465  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:54:41.341545  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:54:41.341622  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:54:41.341659  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:54:41.341719  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:54:41.831104  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:54:41.953522  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:54:42.205323  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:54:42.449785  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:54:42.618213  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:54:42.619047  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:54:42.621575  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:54:42.624790  418823 out.go:252]   - Booting up control plane ...
	I1210 07:54:42.624883  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:54:42.624959  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:54:42.625035  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:54:42.639751  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:54:42.639880  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:54:42.648702  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:54:42.648797  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:54:42.648841  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:54:42.779710  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:54:42.779857  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:58:42.778273  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000214333s
	I1210 07:58:42.778318  418823 kubeadm.go:319] 
	I1210 07:58:42.778386  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:58:42.778418  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:58:42.778523  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:58:42.778528  418823 kubeadm.go:319] 
	I1210 07:58:42.778632  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:58:42.778679  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:58:42.778709  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:58:42.778712  418823 kubeadm.go:319] 
	I1210 07:58:42.783355  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:58:42.783807  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:58:42.783918  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:58:42.784153  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:58:42.784159  418823 kubeadm.go:319] 
	I1210 07:58:42.784227  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
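
This is the real failure: kubeadm wrote the static-pod manifests and started the kubelet, then polled the kubelet's local health endpoint for the full 4m0s budget without one healthy reply, so no static pods (and therefore no apiserver) ever came up. The probe and the suggested follow-ups are all quoted in the output itself:

    curl -sSL http://127.0.0.1:10248/healthz   # the kubelet liveness probe kubeadm retries
    systemctl status kubelet                   # is the unit even running?
    journalctl -xeu kubelet                    # why it is unhealthy or crash-looping
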
	W1210 07:58:42.784352  418823 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000214333s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
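
The failure above is kubeadm's wait-control-plane phase timing out on the kubelet's local health endpoint (127.0.0.1:10248). A minimal way to reproduce that probe by hand, assuming the functional-314220 profile from this run is still up and reachable via minikube ssh:

    # Probe the health endpoint kubeadm polls for up to 4m0s.
    # A healthy kubelet answers "ok"; in this run the call never succeeds.
    out/minikube-linux-arm64 -p functional-314220 ssh -- curl -sSL -m 5 http://127.0.0.1:10248/healthz

    # When the probe fails, the kubelet unit itself is the next stop:
    out/minikube-linux-arm64 -p functional-314220 ssh -- sudo journalctl -xeu kubelet | tail -n 50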
	
	I1210 07:58:42.784459  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 07:58:43.198112  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:58:43.211996  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:58:43.212056  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:58:43.219732  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:58:43.219740  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:58:43.219791  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:58:43.228096  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:58:43.228153  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:58:43.235851  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:58:43.244105  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:58:43.244161  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:58:43.252172  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.259776  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:58:43.259838  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.267182  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:58:43.274881  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:58:43.274939  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
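
The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes survives only if it already references the expected control-plane endpoint. A condensed sketch of the same logic as one shell loop, with the endpoint and file names taken from this log (run on the node):

    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it points at the expected endpoint;
      # otherwise treat it as stale and remove it, as minikube does above.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done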
	I1210 07:58:43.282494  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:58:43.323208  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:58:43.323257  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:58:43.392495  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:58:43.392566  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:58:43.392605  418823 kubeadm.go:319] OS: Linux
	I1210 07:58:43.392653  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:58:43.392700  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:58:43.392753  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:58:43.392806  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:58:43.392856  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:58:43.392902  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:58:43.392950  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:58:43.392997  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:58:43.393041  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:58:43.459397  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:58:43.459500  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:58:43.459594  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:58:43.467473  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:58:43.472849  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:58:43.472935  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:58:43.472999  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:58:43.473075  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:58:43.473135  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:58:43.473203  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:58:43.473256  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:58:43.473324  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:58:43.473385  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:58:43.474012  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:58:43.474414  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:58:43.474604  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:58:43.474667  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:58:43.690916  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:58:43.922489  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:58:44.055635  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:58:44.187430  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:58:44.228570  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:58:44.229295  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:58:44.233140  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:58:44.236201  418823 out.go:252]   - Booting up control plane ...
	I1210 07:58:44.236295  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:58:44.236371  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:58:44.236933  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:58:44.251863  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:58:44.251964  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:58:44.259287  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:58:44.259598  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:58:44.259801  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:58:44.391514  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:58:44.391627  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:02:44.389879  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00019224s
	I1210 08:02:44.389912  418823 kubeadm.go:319] 
	I1210 08:02:44.389980  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 08:02:44.390013  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 08:02:44.390123  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 08:02:44.390155  418823 kubeadm.go:319] 
	I1210 08:02:44.390271  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 08:02:44.390303  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 08:02:44.390331  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 08:02:44.390335  418823 kubeadm.go:319] 
	I1210 08:02:44.395328  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:02:44.395720  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 08:02:44.395823  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 08:02:44.396068  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 08:02:44.396072  418823 kubeadm.go:319] 
	I1210 08:02:44.396138  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 08:02:44.396188  418823 kubeadm.go:403] duration metric: took 12m7.052327562s to StartCluster
	I1210 08:02:44.396219  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:02:44.396280  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:02:44.421374  418823 cri.go:89] found id: ""
	I1210 08:02:44.421389  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.421396  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:02:44.421401  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:02:44.421463  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:02:44.447342  418823 cri.go:89] found id: ""
	I1210 08:02:44.447356  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.447363  418823 logs.go:284] No container was found matching "etcd"
	I1210 08:02:44.447368  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:02:44.447429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:02:44.472601  418823 cri.go:89] found id: ""
	I1210 08:02:44.472614  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.472621  418823 logs.go:284] No container was found matching "coredns"
	I1210 08:02:44.472627  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:02:44.472684  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:02:44.501973  418823 cri.go:89] found id: ""
	I1210 08:02:44.501986  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.501993  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:02:44.502000  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:02:44.502059  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:02:44.527997  418823 cri.go:89] found id: ""
	I1210 08:02:44.528011  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.528018  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:02:44.528023  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:02:44.528083  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:02:44.558353  418823 cri.go:89] found id: ""
	I1210 08:02:44.558367  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.558374  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:02:44.558379  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:02:44.558439  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:02:44.583751  418823 cri.go:89] found id: ""
	I1210 08:02:44.583764  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.583771  418823 logs.go:284] No container was found matching "kindnet"
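
Every empty "found id" above is the same crictl query repeated once per control-plane component. The enumeration collapses to a short loop (component names copied from the log; crictl is already present on the node):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      # List containers in any state matching the name filter; empty output
      # means the control plane never produced a container for that component.
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done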
	I1210 08:02:44.583780  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 08:02:44.583792  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:02:44.598048  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:02:44.598065  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:02:44.670126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:02:44.661736   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.662310   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664031   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664536   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.666240   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 08:02:44.661736   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.662310   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664031   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664536   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.666240   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
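
The describe-nodes probe fails at the transport layer: nothing is listening on the apiserver port at all. A quick check from the node, using the port from the kubeconfig endpoint in this log, would be expected to fail the same way while the control plane is down:

    # An immediate "connection refused" is consistent with the errors above.
    curl -sk --max-time 5 https://localhost:8441/healthz || echo "apiserver not listening on 8441"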
	I1210 08:02:44.670142  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:02:44.670153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:02:44.741133  418823 logs.go:123] Gathering logs for container status ...
	I1210 08:02:44.741153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:02:44.768780  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 08:02:44.768797  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 08:02:44.836964  418823 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 08:02:44.837011  418823 out.go:285] * 
	W1210 08:02:44.837080  418823 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.837155  418823 out.go:285] * 
	W1210 08:02:44.839300  418823 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 08:02:44.844978  418823 out.go:203] 
	W1210 08:02:44.848781  418823 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.848820  418823 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 08:02:44.848841  418823 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 08:02:44.852612  418823 out.go:203] 
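
minikube's own suggestion above is to retry with an explicit kubelet cgroup driver. A sketch of that retry for this profile, with the flag copied verbatim from the suggestion (whether it helps depends on the cgroup v1 validation failure visible in the kubelet logs below):

    out/minikube-linux-arm64 start -p functional-314220 \
      --extra-config=kubelet.cgroup-driver=systemd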
	
	
	==> CRI-O <==
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.462758438Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=30754360-77fc-41d9-961a-703309105bf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.463612109Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ce58a9d0-ec5e-41a7-a162-73ed5f175442 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464131886Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=af48f6b4-c6c8-458a-8d08-3443ae3e881b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464662517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4191b9f9-c176-40bc-b3bb-ec0edd3076c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465135606Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eaddf230-cd32-4499-a396-5bbd1b1cb31a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465587147Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=300cd477-ebce-4fed-8c84-bc9781d52848 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.466022016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1367fa60-098e-4704-b6f3-b114a75d5405 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067588181Z" level=info msg="Checking image status: kicbase/echo-server:functional-314220" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067769762Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067811068Z" level=info msg="Image kicbase/echo-server:functional-314220 not found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067872754Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-314220 found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.095996195Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-314220" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.096290073Z" level=info msg="Image docker.io/kicbase/echo-server:functional-314220 not found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.09635135Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-314220 found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132021615Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-314220" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132192218Z" level=info msg="Image localhost/kicbase/echo-server:functional-314220 not found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.13224538Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-314220 found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091386609Z" level=info msg="Checking image status: kicbase/echo-server:functional-314220" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091538619Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091581639Z" level=info msg="Image kicbase/echo-server:functional-314220 not found" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091640035Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-314220 found" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.133859584Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-314220" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.133994224Z" level=info msg="Image docker.io/kicbase/echo-server:functional-314220 not found" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.134034315Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-314220 found" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.166199113Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-314220" id=16f24585-97cc-4a0b-a37c-9ad94456e987 name=/runtime.v1.ImageService/ImageStatus
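
The repeated ImageStatus misses above are CRI-O resolving the unqualified kicbase/echo-server:functional-314220 reference against each candidate prefix (docker.io, localhost) and finding none. One way to confirm what the local store actually holds, using the same crictl CLI seen elsewhere in this log:

    # List locally stored images and filter for the tag the test expects.
    sudo crictl images | grep echo-server || echo "echo-server image not present"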
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:04:55.620655   23347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:04:55.621314   23347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:04:55.622880   23347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:04:55.623384   23347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:04:55.625114   23347 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	[Dec10 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 08:04:55 up  2:47,  0 user,  load average: 0.09, 0.20, 0.44
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:04:53 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:04:53 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 492.
	Dec 10 08:04:53 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:53 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:53 functional-314220 kubelet[23237]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:53 functional-314220 kubelet[23237]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:53 functional-314220 kubelet[23237]: E1210 08:04:53.975565   23237 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:04:53 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:04:53 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:04:54 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 493.
	Dec 10 08:04:54 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:54 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:54 functional-314220 kubelet[23250]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:54 functional-314220 kubelet[23250]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:54 functional-314220 kubelet[23250]: E1210 08:04:54.719045   23250 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:04:54 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:04:54 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:04:55 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 494.
	Dec 10 08:04:55 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:55 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:04:55 functional-314220 kubelet[23306]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:55 functional-314220 kubelet[23306]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:04:55 functional-314220 kubelet[23306]: E1210 08:04:55.486497   23306 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:04:55 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:04:55 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
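
The kubelet journal above pins down the real failure: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host, matching the preflight warning about FailCgroupV1 in the kubeadm output. As a debugging sketch only (not a fix for this job), the opt-in the warning describes could be applied to the kubelet config kubeadm wrote, assuming the lowerCamelCase KubeletConfiguration field name failCgroupV1:

    # Opt back in to cgroup v1 for kubelet >= 1.35 (assumed field: failCgroupV1),
    # replacing any existing value rather than appending a duplicate key.
    sudo sed -i 's/^failCgroupV1:.*/failCgroupV1: false/' /var/lib/kubelet/config.yaml
    grep -q '^failCgroupV1:' /var/lib/kubelet/config.yaml || \
      echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet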
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (359.646262ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1210 08:03:11.734558  378528 retry.go:31] will retry after 4.384770364s: Temporary Error: Get "http://10.108.159.154": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 14 more times]
I1210 08:03:26.121090  378528 retry.go:31] will retry after 3.61422874s: Temporary Error: Get "http://10.108.159.154": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 12 more times]
I1210 08:03:39.736460  378528 retry.go:31] will retry after 9.251914223s: Temporary Error: Get "http://10.108.159.154": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 18 more times]
I1210 08:03:58.988768  378528 retry.go:31] will retry after 14.219399825s: Temporary Error: Get "http://10.108.159.154": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 24 more times]
I1210 08:04:23.209929  378528 retry.go:31] will retry after 20.715465514s: Temporary Error: Get "http://10.108.159.154": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
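The retry.go intervals in this log (4.38s, 3.61s, 9.25s, 14.22s, 20.72s) are consistent with exponential backoff around a growing base with heavy jitter. A self-contained sketch of that pattern, assuming a 1.5x growth factor, plus-or-minus 50% jitter, and a 2s client timeout (minikube's exact tuning is not shown in this log):

    // Sketch only: exponential backoff with jitter, the pattern suggested by the
    // retry.go lines above. Growth factor, jitter range, and timeout are
    // assumptions, not minikube's actual parameters.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "net/http"
        "time"
    )

    func retryGet(url string, attempts int) error {
        client := &http.Client{Timeout: 2 * time.Second} // assumed; the log shows Client.Timeout exceeded
        base := 4 * time.Second
        for i := 0; i < attempts; i++ {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                return nil // service answered
            }
            // Randomize within +/-50% of the base so concurrent pollers spread out.
            wait := time.Duration(float64(base) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            base = time.Duration(float64(base) * 1.5) // grow the base each attempt
        }
        return errors.New("service never became reachable")
    }

    func main() {
        if err := retryGet("http://10.108.159.154", 5); err != nil {
            fmt.Println(err)
        }
    }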
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 66 more times]
E1210 08:05:30.873530  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[last message repeated 23 more times]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
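
The poll behind these warnings is an ordinary label-selector pod list against the cluster. A manual equivalent (a sketch, assuming a kubeconfig context named after the functional-314220 profile) fails with the same connection-refused error for as long as the apiserver on 192.168.49.2:8441 is down:

	# list the pods the helper is waiting on; fails identically while the apiserver is unreachable
	kubectl --context functional-314220 -n kube-system get pods -l integration-test=storage-provisioner
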
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (314.811222ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
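
Before attempting any kubectl commands, the harness probes individual status fields through minikube's Go-template output. The two probes used in this post-mortem (both commands appear verbatim in this report) disagree in a telling way: the host container is up, but the apiserver inside it is not:

	# per-component status via Go templates
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220   # prints: Stopped
	out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220        # prints: Running
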
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
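
The inspect output above already holds the host-side forward for the apiserver port. A one-liner to pull it back out (a sketch using docker inspect's Go-template flag; the port values are the ones shown in the JSON):

	# extract the host port mapped to the apiserver's 8441/tcp
	docker inspect functional-314220 -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'   # prints: 33161
	# so the harness reaches 192.168.49.2:8441 through 127.0.0.1:33161
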
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (335.690755ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-314220 ssh findmnt -T /mount-9p | grep 9p                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh            │ functional-314220 ssh -- ls -la /mount-9p                                                                                                     │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh            │ functional-314220 ssh sudo umount -f /mount-9p                                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ mount          │ -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount1 --alsologtostderr -v=1          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ ssh            │ functional-314220 ssh findmnt -T /mount1                                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ mount          │ -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount3 --alsologtostderr -v=1          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ mount          │ -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount2 --alsologtostderr -v=1          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ ssh            │ functional-314220 ssh findmnt -T /mount1                                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh            │ functional-314220 ssh findmnt -T /mount2                                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh            │ functional-314220 ssh findmnt -T /mount3                                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ mount          │ -p functional-314220 --kill=true                                                                                                              │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ start          │ -p functional-314220 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ start          │ -p functional-314220 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ start          │ -p functional-314220 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-314220 --alsologtostderr -v=1                                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ update-context │ functional-314220 update-context --alsologtostderr -v=2                                                                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ update-context │ functional-314220 update-context --alsologtostderr -v=2                                                                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ update-context │ functional-314220 update-context --alsologtostderr -v=2                                                                                       │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ image          │ functional-314220 image ls --format short --alsologtostderr                                                                                   │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ image          │ functional-314220 image ls --format yaml --alsologtostderr                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ ssh            │ functional-314220 ssh pgrep buildkitd                                                                                                         │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │                     │
	│ image          │ functional-314220 image build -t localhost/my-image:functional-314220 testdata/build --alsologtostderr                                        │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ image          │ functional-314220 image ls                                                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ image          │ functional-314220 image ls --format json --alsologtostderr                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	│ image          │ functional-314220 image ls --format table --alsologtostderr                                                                                   │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:05 UTC │ 10 Dec 25 08:05 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 08:05:07
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 08:05:07.394971  437710 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:05:07.395185  437710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:05:07.395218  437710 out.go:374] Setting ErrFile to fd 2...
	I1210 08:05:07.395239  437710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:05:07.395879  437710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:05:07.396304  437710 out.go:368] Setting JSON to false
	I1210 08:05:07.397159  437710 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10058,"bootTime":1765343850,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 08:05:07.397255  437710 start.go:143] virtualization:  
	I1210 08:05:07.400545  437710 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 08:05:07.403482  437710 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:05:07.403567  437710 notify.go:221] Checking for updates...
	I1210 08:05:07.409228  437710 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:05:07.412142  437710 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:05:07.415611  437710 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 08:05:07.418405  437710 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:05:07.421215  437710 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:05:07.424553  437710 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 08:05:07.425232  437710 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:05:07.456710  437710 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:05:07.456836  437710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:05:07.529584  437710 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:05:07.520295348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:05:07.529696  437710 docker.go:319] overlay module found
	I1210 08:05:07.532635  437710 out.go:179] * Using the docker driver based on existing profile
	I1210 08:05:07.535467  437710 start.go:309] selected driver: docker
	I1210 08:05:07.535486  437710 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:05:07.535585  437710 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:05:07.539094  437710 out.go:203] 
	W1210 08:05:07.541939  437710 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 08:05:07.544735  437710 out.go:203] 
	
	
	==> CRI-O <==
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.462758438Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=30754360-77fc-41d9-961a-703309105bf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.463612109Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ce58a9d0-ec5e-41a7-a162-73ed5f175442 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464131886Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=af48f6b4-c6c8-458a-8d08-3443ae3e881b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464662517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4191b9f9-c176-40bc-b3bb-ec0edd3076c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465135606Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eaddf230-cd32-4499-a396-5bbd1b1cb31a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465587147Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=300cd477-ebce-4fed-8c84-bc9781d52848 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.466022016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1367fa60-098e-4704-b6f3-b114a75d5405 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067588181Z" level=info msg="Checking image status: kicbase/echo-server:functional-314220" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067769762Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067811068Z" level=info msg="Image kicbase/echo-server:functional-314220 not found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067872754Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-314220 found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.095996195Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-314220" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.096290073Z" level=info msg="Image docker.io/kicbase/echo-server:functional-314220 not found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.09635135Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-314220 found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132021615Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-314220" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132192218Z" level=info msg="Image localhost/kicbase/echo-server:functional-314220 not found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.13224538Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-314220 found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091386609Z" level=info msg="Checking image status: kicbase/echo-server:functional-314220" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091538619Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091581639Z" level=info msg="Image kicbase/echo-server:functional-314220 not found" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.091640035Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-314220 found" id=48a93da4-5ee9-4739-b4c1-47bc2a67ca0c name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.133859584Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-314220" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.133994224Z" level=info msg="Image docker.io/kicbase/echo-server:functional-314220 not found" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.134034315Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-314220 found" id=f78161c9-d7b6-42a0-b3d9-39f237b54b13 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:58 functional-314220 crio[9896]: time="2025-12-10T08:02:58.166199113Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-314220" id=16f24585-97cc-4a0b-a37c-9ad94456e987 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:07:05.250623   25404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:07:05.251536   25404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:07:05.253155   25404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:07:05.253482   25404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:07:05.254982   25404 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	[Dec10 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 08:07:05 up  2:49,  0 user,  load average: 0.24, 0.25, 0.44
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:07:02 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:07:03 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 665.
	Dec 10 08:07:03 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:07:03 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:07:03 functional-314220 kubelet[25276]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:07:03 functional-314220 kubelet[25276]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:07:03 functional-314220 kubelet[25276]: E1210 08:07:03.705437   25276 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:07:03 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:07:03 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:07:04 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 666.
	Dec 10 08:07:04 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:07:04 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:07:04 functional-314220 kubelet[25297]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:07:04 functional-314220 kubelet[25297]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:07:04 functional-314220 kubelet[25297]: E1210 08:07:04.486417   25297 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:07:04 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:07:04 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:07:05 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 667.
	Dec 10 08:07:05 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:07:05 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:07:05 functional-314220 kubelet[25395]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:07:05 functional-314220 kubelet[25395]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:07:05 functional-314220 kubelet[25395]: E1210 08:07:05.226711   25395 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:07:05 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:07:05 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
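
The kubelet section above shows the real failure: kubelet is crash-looping (restart counter at 667) because this kubelet build refuses to run on a cgroup v1 host, and the Ubuntu 20.04 / 5.15-aws host here still defaults to cgroup v1. A quick way to confirm which cgroup version is in play, on the host and inside the minikube node container (a diagnostic sketch using standard coreutils, not part of the test suite):

    # Prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on a cgroup v1 (legacy/hybrid) host.
    stat -fc %T /sys/fs/cgroup

    # Same check inside the node container:
    docker exec functional-314220 stat -fc %T /sys/fs/cgroup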
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (296.527648ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.65s)
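
Note on the RSRC_INSUFFICIENT_REQ_MEMORY exit in the log above: the profile itself is configured with Memory:4096, so the rejected 250MiB request appears to come from a start invocation that passed an explicitly tiny allocation; minikube validates requested memory against the 1800MB usable minimum before touching the cluster. As a hedged illustration (profile name from this report, flag value an assumption), a start that clears this validation looks like:

    # Restart the profile with an allocation above minikube's 1800MB minimum.
    out/minikube-linux-arm64 start -p functional-314220 --memory=4096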

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-314220 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-314220 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (80.651495ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-314220 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
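
Every assertion in this test fails the same way: with the apiserver on 192.168.49.2:8441 refusing connections, `get nodes` returns an empty items list and `index .items 0` panics inside the template, so the label checks never run at all. A more defensive variant of the same go-template (a sketch, not what functional_test.go actually executes) guards the index behind an emptiness check and prints nothing when no node exists:

    kubectl --context functional-314220 get nodes --output=go-template \
      --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'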
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-314220
helpers_test.go:244: (dbg) docker inspect functional-314220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	        "Created": "2025-12-10T07:35:53.582567333Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T07:35:53.640615938Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hostname",
	        "HostsPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/hosts",
	        "LogPath": "/var/lib/docker/containers/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30/82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30-json.log",
	        "Name": "/functional-314220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-314220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-314220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "82cb4926192d1c59983540c13a67bd797884ed02e5af9f704785d87337cf3a30",
	                "LowerDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de27e893f3c08c49faf415827faaa183fb140b8507f9671d2f57f406c1f1cb35/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-314220",
	                "Source": "/var/lib/docker/volumes/functional-314220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-314220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-314220",
	                "name.minikube.sigs.k8s.io": "functional-314220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7384df8731cf8cd48ab1dfb3eb79fa2c44cd426e6255cad7345dab2e19d0bfd",
	            "SandboxKey": "/var/run/docker/netns/a7384df8731c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-314220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:6b:8c:59:cb:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "854df014d6f98ea92f9938b53e3af35a6df7a455941ebb6e97efa575a4d35230",
	                    "EndpointID": "285d153b9d616bc3d729ab764c8a3d3c5b28bb20787fa60a80cbf50869c0c24b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-314220",
	                        "82cb4926192d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
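
The inspect dump confirms the container side is healthy: State.Status is "running" and 8441/tcp is published to 127.0.0.1:33161, so the refused connections on port 8441 are a guest-side (apiserver/kubelet) failure rather than a Docker networking one. For spot checks, the same fields can be pulled with a format string instead of reading the full JSON; the port lookup below mirrors the template minikube itself runs later in these logs:

    # Container run state:
    docker inspect -f '{{.State.Status}}' functional-314220
    # Host port that 8441/tcp (the apiserver) is published on:
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-314220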
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-314220 -n functional-314220: exit status 2 (405.567662ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-314220 logs -n 25: (1.440963609s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ list                                                                                                                                                         │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl images                                                                                                                     │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	│ cache   │ functional-314220 cache reload                                                                                                                               │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ ssh     │ functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │ 10 Dec 25 07:50 UTC │
	│ kubectl │ functional-314220 kubectl -- --context functional-314220 get pods                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	│ start   │ -p functional-314220 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 07:50 UTC │                     │
	│ cp      │ functional-314220 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ config  │ functional-314220 config unset cpus                                                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ config  │ functional-314220 config get cpus                                                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ config  │ functional-314220 config set cpus 2                                                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ config  │ functional-314220 config get cpus                                                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ config  │ functional-314220 config unset cpus                                                                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ config  │ functional-314220 config get cpus                                                                                                                            │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ cp      │ functional-314220 cp functional-314220:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3174646371/001/cp-test.txt │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo systemctl is-active docker                                                                                                        │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ ssh     │ functional-314220 ssh -n functional-314220 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh sudo systemctl is-active containerd                                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	│ cp      │ functional-314220 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ ssh     │ functional-314220 ssh -n functional-314220 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │ 10 Dec 25 08:02 UTC │
	│ image   │ functional-314220 image load --daemon kicbase/echo-server:functional-314220 --alsologtostderr                                                                │ functional-314220 │ jenkins │ v1.37.0 │ 10 Dec 25 08:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:50:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:50:32.899349  418823 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:50:32.899467  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899470  418823 out.go:374] Setting ErrFile to fd 2...
	I1210 07:50:32.899475  418823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:50:32.899728  418823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:50:32.900077  418823 out.go:368] Setting JSON to false
	I1210 07:50:32.900875  418823 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9183,"bootTime":1765343850,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:50:32.900927  418823 start.go:143] virtualization:  
	I1210 07:50:32.904391  418823 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:50:32.909970  418823 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:50:32.910062  418823 notify.go:221] Checking for updates...
	I1210 07:50:32.913755  418823 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:50:32.917032  418823 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:50:32.919882  418823 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:50:32.922630  418823 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:50:32.926514  418823 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:50:32.929831  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:32.929952  418823 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:50:32.973254  418823 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:50:32.973375  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.030281  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.020639734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.030378  418823 docker.go:319] overlay module found
	I1210 07:50:33.033510  418823 out.go:179] * Using the docker driver based on existing profile
	I1210 07:50:33.036367  418823 start.go:309] selected driver: docker
	I1210 07:50:33.036393  418823 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.036475  418823 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:50:33.036573  418823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:50:33.101667  418823 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-10 07:50:33.09179395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:50:33.102098  418823 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 07:50:33.102120  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:33.102171  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:33.102212  418823 start.go:353] cluster config:
	{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:33.107143  418823 out.go:179] * Starting "functional-314220" primary control-plane node in "functional-314220" cluster
	I1210 07:50:33.110125  418823 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:50:33.113004  418823 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:50:33.115816  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:33.115854  418823 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:50:33.115862  418823 cache.go:65] Caching tarball of preloaded images
	I1210 07:50:33.115956  418823 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 07:50:33.115966  418823 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
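The preload check above looks for a versioned tarball of container images under the profile cache before anything would be downloaded. A minimal Go sketch of that existence check, assuming the cache layout shown in the log (the helper name is illustrative, not minikube's API):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath mirrors the cache layout seen in the log above; the
    // directory structure and file name are taken from the log, the
    // helper itself is illustrative.
    func preloadPath(home, k8sVersion, runtime, arch string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4",
            k8sVersion, runtime, arch)
        return filepath.Join(home, ".minikube", "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("HOME"), "v1.35.0-beta.0", "cri-o", "arm64")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("found local preload, skipping download:", p)
        } else {
            fmt.Println("no local preload, would download:", p)
        }
    }

When the stat fails, the real code falls back to downloading the tarball before extracting it into the node's image store.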
	I1210 07:50:33.115961  418823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:50:33.116084  418823 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/config.json ...
	I1210 07:50:33.135517  418823 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 07:50:33.135528  418823 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 07:50:33.135548  418823 cache.go:243] Successfully downloaded all kic artifacts
	I1210 07:50:33.135579  418823 start.go:360] acquireMachinesLock for functional-314220: {Name:mk24b69a1578b6f8eb2fecd1b5e1ddb99787d8b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 07:50:33.135644  418823 start.go:364] duration metric: took 47.935µs to acquireMachinesLock for "functional-314220"
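acquireMachinesLock serializes machine create/start operations, and the lock spec in the log (Delay:500ms Timeout:10m0s) describes a retry-until-deadline acquisition. A stand-in sketch using an exclusive lock file, purely illustrative (minikube's actual lock implementation differs):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireFileLock retries an exclusive lock-file create every `delay`
    // until it succeeds or `timeout` expires, matching the Delay/Timeout
    // fields in the log line above.
    func acquireFileLock(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f.Close() // lock held; caller removes path to release
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for lock " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        if err := acquireFileLock("/tmp/mk-machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("took %s to acquire lock\n", time.Since(start))
        os.Remove("/tmp/mk-machines.lock")
    }

An uncontended acquisition succeeds on the first attempt, which is why the log reports a duration of only 47.935µs.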
	I1210 07:50:33.135662  418823 start.go:96] Skipping create...Using existing machine configuration
	I1210 07:50:33.135667  418823 fix.go:54] fixHost starting: 
	I1210 07:50:33.135928  418823 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
	I1210 07:50:33.153142  418823 fix.go:112] recreateIfNeeded on functional-314220: state=Running err=<nil>
	W1210 07:50:33.153176  418823 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 07:50:33.156510  418823 out.go:252] * Updating the running docker "functional-314220" container ...
	I1210 07:50:33.156542  418823 machine.go:94] provisionDockerMachine start ...
	I1210 07:50:33.156629  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.173363  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.173679  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.173685  418823 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 07:50:33.306701  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.306715  418823 ubuntu.go:182] provisioning hostname "functional-314220"
	I1210 07:50:33.306784  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.323402  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.323703  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.323711  418823 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-314220 && echo "functional-314220" | sudo tee /etc/hostname
	I1210 07:50:33.463802  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-314220
	
	I1210 07:50:33.463873  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.481663  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:33.481979  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:33.481993  418823 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-314220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-314220/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-314220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 07:50:33.615371  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 07:50:33.615387  418823 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 07:50:33.615415  418823 ubuntu.go:190] setting up certificates
	I1210 07:50:33.615424  418823 provision.go:84] configureAuth start
	I1210 07:50:33.615481  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:33.633344  418823 provision.go:143] copyHostCerts
	I1210 07:50:33.633409  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 07:50:33.633416  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 07:50:33.633490  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 07:50:33.633597  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 07:50:33.633601  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 07:50:33.633627  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 07:50:33.633685  418823 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 07:50:33.633688  418823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 07:50:33.633710  418823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 07:50:33.633815  418823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.functional-314220 san=[127.0.0.1 192.168.49.2 functional-314220 localhost minikube]
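The server cert is generated with subject-alternative names covering every address the machine answers on: 127.0.0.1, 192.168.49.2, the node hostname, localhost, and minikube. A self-signed sketch with the same SAN set using crypto/x509 (the real cert is signed by the minikube CA rather than self-signed, so treat this as illustrative):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-314220"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            // SANs matching the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"functional-314220", "localhost", "minikube"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }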
	I1210 07:50:33.839628  418823 provision.go:177] copyRemoteCerts
	I1210 07:50:33.839683  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 07:50:33.839721  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:33.857491  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:33.954662  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 07:50:33.972200  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 07:50:33.989946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 07:50:34.010600  418823 provision.go:87] duration metric: took 395.152109ms to configureAuth
	I1210 07:50:34.010620  418823 ubuntu.go:206] setting minikube options for container-runtime
	I1210 07:50:34.010837  418823 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 07:50:34.010945  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.031319  418823 main.go:143] libmachine: Using SSH client type: native
	I1210 07:50:34.031635  418823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1210 07:50:34.031646  418823 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 07:50:34.394456  418823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 07:50:34.394468  418823 machine.go:97] duration metric: took 1.237919377s to provisionDockerMachine
	I1210 07:50:34.394480  418823 start.go:293] postStartSetup for "functional-314220" (driver="docker")
	I1210 07:50:34.394492  418823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 07:50:34.394553  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 07:50:34.394594  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.425725  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.527110  418823 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 07:50:34.530555  418823 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 07:50:34.530572  418823 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 07:50:34.530582  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 07:50:34.530636  418823 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 07:50:34.530720  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 07:50:34.530798  418823 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts -> hosts in /etc/test/nested/copy/378528
	I1210 07:50:34.530841  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/378528
	I1210 07:50:34.538245  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:34.555946  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts --> /etc/test/nested/copy/378528/hosts (40 bytes)
	I1210 07:50:34.573402  418823 start.go:296] duration metric: took 178.908422ms for postStartSetup
	I1210 07:50:34.573478  418823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 07:50:34.573515  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.591144  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.684092  418823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 07:50:34.688828  418823 fix.go:56] duration metric: took 1.553153828s for fixHost
	I1210 07:50:34.688843  418823 start.go:83] releasing machines lock for "functional-314220", held for 1.553192081s
	I1210 07:50:34.688922  418823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-314220
	I1210 07:50:34.705960  418823 ssh_runner.go:195] Run: cat /version.json
	I1210 07:50:34.705982  418823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 07:50:34.706002  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.706033  418823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
	I1210 07:50:34.724227  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.734363  418823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
	I1210 07:50:34.905519  418823 ssh_runner.go:195] Run: systemctl --version
	I1210 07:50:34.911896  418823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 07:50:34.947949  418823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 07:50:34.952265  418823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 07:50:34.952348  418823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 07:50:34.960087  418823 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 07:50:34.960100  418823 start.go:496] detecting cgroup driver to use...
	I1210 07:50:34.960131  418823 detect.go:187] detected "cgroupfs" cgroup driver on host os
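One plausible heuristic for this detection, not necessarily detect.go's exact logic: cgroup v2 exposes a unified hierarchy with a cgroup.controllers file, and absent that (or a systemd cgroup manager), "cgroupfs" is the safe driver choice.

    package main

    import (
        "fmt"
        "os"
    )

    // Rough cgroup-version probe; an assumption, not minikube's code.
    func main() {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 unified hierarchy present")
        } else {
            fmt.Println("cgroup v1 hierarchy; defaulting to cgroupfs driver")
        }
    }

Whatever the detection yields must then be written into both the CRI-O config and the KubeletConfiguration, which is exactly what the cgroup_manager sed edits and the cgroupDriver field below do.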
	I1210 07:50:34.960194  418823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 07:50:34.975734  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 07:50:34.988235  418823 docker.go:218] disabling cri-docker service (if available) ...
	I1210 07:50:34.988306  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 07:50:35.008024  418823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 07:50:35.023507  418823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 07:50:35.140776  418823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 07:50:35.287143  418823 docker.go:234] disabling docker service ...
	I1210 07:50:35.287205  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 07:50:35.302191  418823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 07:50:35.316045  418823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 07:50:35.435977  418823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 07:50:35.558581  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 07:50:35.570905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 07:50:35.584271  418823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 07:50:35.584341  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.593128  418823 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 07:50:35.593191  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.602242  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.611204  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.619936  418823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 07:50:35.627869  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.636843  418823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.645059  418823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 07:50:35.653527  418823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 07:50:35.660914  418823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
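Kubernetes pod networking requires IPv4 forwarding on the node, hence the `echo 1 > /proc/sys/net/ipv4/ip_forward` above. The same toggle from Go, reading and (as root) writing the proc file:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Read and, when running as root, enable net.ipv4.ip_forward via /proc,
    // the same effect as the shell one-liner in the log.
    func main() {
        const p = "/proc/sys/net/ipv4/ip_forward"
        b, err := os.ReadFile(p)
        if err != nil {
            panic(err)
        }
        fmt.Println("ip_forward =", strings.TrimSpace(string(b)))
        if os.Geteuid() == 0 {
            if err := os.WriteFile(p, []byte("1\n"), 0); err != nil {
                panic(err)
            }
            fmt.Println("ip_forward enabled")
        }
    }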
	I1210 07:50:35.668098  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:35.785150  418823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 07:50:35.938526  418823 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 07:50:35.938594  418823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 07:50:35.943564  418823 start.go:564] Will wait 60s for crictl version
	I1210 07:50:35.943634  418823 ssh_runner.go:195] Run: which crictl
	I1210 07:50:35.950126  418823 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 07:50:35.976476  418823 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 07:50:35.976565  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.013250  418823 ssh_runner.go:195] Run: crio --version
	I1210 07:50:36.049514  418823 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 07:50:36.052392  418823 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 07:50:36.073467  418823 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1210 07:50:36.080871  418823 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1210 07:50:36.083861  418823 kubeadm.go:884] updating cluster {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 07:50:36.084003  418823 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 07:50:36.084083  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.122033  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.122045  418823 crio.go:433] Images already preloaded, skipping extraction
	I1210 07:50:36.122104  418823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 07:50:36.147981  418823 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 07:50:36.147994  418823 cache_images.go:86] Images are preloaded, skipping loading
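Both preload verifications above shell out to `crictl images --output json` and compare the result against the expected image list. A sketch of parsing that output; the struct fields follow the CRI's JSON field names as an assumption, trimmed to what is needed here:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList is a trimmed-down guess at the `crictl images --output json`
    // shape; only repoTags matter for a preload comparison.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            fmt.Println(img.RepoTags)
        }
    }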
	I1210 07:50:36.148000  418823 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1210 07:50:36.148093  418823 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-314220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
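The kubelet unit above is rendered from cluster values (runtime, Kubernetes version, node name, node IP) and scp'd to the systemd drop-in path a few lines later. A sketch of that rendering with text/template; the template text paraphrases the unit in the log and is illustrative:

    package main

    import (
        "os"
        "text/template"
    )

    // A condensed version of the kubelet drop-in seen in the log,
    // parameterized on the values that vary per cluster.
    const unit = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.K8sVersion}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        err := t.Execute(os.Stdout, map[string]string{
            "Runtime":    "crio",
            "K8sVersion": "v1.35.0-beta.0",
            "Node":       "functional-314220",
            "IP":         "192.168.49.2",
        })
        if err != nil {
            panic(err)
        }
    }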
	I1210 07:50:36.148179  418823 ssh_runner.go:195] Run: crio config
	I1210 07:50:36.223557  418823 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1210 07:50:36.223582  418823 cni.go:84] Creating CNI manager for ""
	I1210 07:50:36.223591  418823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:50:36.223605  418823 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 07:50:36.223627  418823 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-314220 NodeName:functional-314220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 07:50:36.223742  418823 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-314220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 07:50:36.223809  418823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 07:50:36.231667  418823 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 07:50:36.231750  418823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 07:50:36.239592  418823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1210 07:50:36.252574  418823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 07:50:36.265349  418823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1210 07:50:36.278251  418823 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1210 07:50:36.281864  418823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 07:50:36.395980  418823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 07:50:36.662807  418823 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220 for IP: 192.168.49.2
	I1210 07:50:36.662818  418823 certs.go:195] generating shared ca certs ...
	I1210 07:50:36.662833  418823 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:50:36.662974  418823 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 07:50:36.663036  418823 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 07:50:36.663044  418823 certs.go:257] generating profile certs ...
	I1210 07:50:36.663128  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.key
	I1210 07:50:36.663184  418823 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key.8ae59347
	I1210 07:50:36.663221  418823 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key
	I1210 07:50:36.663326  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 07:50:36.663359  418823 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 07:50:36.663370  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 07:50:36.663396  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 07:50:36.663419  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 07:50:36.663444  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 07:50:36.663487  418823 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 07:50:36.664085  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 07:50:36.684901  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 07:50:36.704871  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 07:50:36.724001  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 07:50:36.742252  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 07:50:36.759395  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 07:50:36.776213  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 07:50:36.793265  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 07:50:36.810512  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 07:50:36.828353  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 07:50:36.845515  418823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 07:50:36.862765  418823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 07:50:36.875122  418823 ssh_runner.go:195] Run: openssl version
	I1210 07:50:36.881447  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.888818  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 07:50:36.896054  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899817  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.899876  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 07:50:36.940839  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 07:50:36.948274  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.955506  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 07:50:36.963139  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966818  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:36.966873  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 07:50:37.008344  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 07:50:37.018542  418823 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.028848  418823 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 07:50:37.037787  418823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041789  418823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.041883  418823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 07:50:37.083088  418823 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
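Each `openssl x509 -hash -noout` call above computes the subject hash that names the trust-store symlink checked right after it (b5213941.0 corresponds to minikubeCA.pem, 51391683.0 to 378528.pem). A sketch that reproduces the link step, shelling out to openssl exactly as the log does; the paths come from the log, the program itself is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // CA certs become trusted by symlinking <subject-hash>.0 -> cert
    // inside /etc/ssl/certs.
    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
            panic(err)
        }
        fmt.Println("trusted via", link)
    }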
	I1210 07:50:37.090399  418823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 07:50:37.093984  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 07:50:37.134711  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 07:50:37.175584  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 07:50:37.216322  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 07:50:37.258210  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 07:50:37.300727  418823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
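The `-checkend 86400` probes ask whether each control-plane certificate remains valid for at least another day. The equivalent check in pure Go, assuming a PEM-encoded cert at one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Pure-Go equivalent of `openssl x509 -checkend 86400`: does the cert
    // stay valid for at least another 24h?
    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
        } else {
            fmt.Println("certificate valid past 24h:", cert.NotAfter)
        }
    }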
	I1210 07:50:37.343870  418823 kubeadm.go:401] StartCluster: {Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:50:37.343957  418823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 07:50:37.344031  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.373693  418823 cri.go:89] found id: ""
	I1210 07:50:37.373755  418823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 07:50:37.382429  418823 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 07:50:37.382439  418823 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 07:50:37.382493  418823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 07:50:37.389449  418823 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.389979  418823 kubeconfig.go:125] found "functional-314220" server: "https://192.168.49.2:8441"
	I1210 07:50:37.391548  418823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 07:50:37.399103  418823 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 07:36:02.271715799 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 07:50:36.273283366 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
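The drift check renders a fresh kubeadm.yaml.new and compares it with the live kubeadm.yaml; here the only difference is the user-supplied enable-admission-plugins value, which is enough to force a reconfigure. A minimal sketch of that decision (the real code runs `diff -u` over ssh, as shown above):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        live, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        fresh, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err1 != nil || err2 != nil {
            fmt.Println("missing config, full init required")
            return
        }
        if !bytes.Equal(live, fresh) {
            fmt.Println("kubeadm config drift detected, reconfiguring cluster")
            return
        }
        fmt.Println("config unchanged, reusing running control plane")
    }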
	I1210 07:50:37.399128  418823 kubeadm.go:1161] stopping kube-system containers ...
	I1210 07:50:37.399140  418823 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 07:50:37.399196  418823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 07:50:37.434614  418823 cri.go:89] found id: ""
	I1210 07:50:37.434674  418823 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 07:50:37.455844  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:50:37.463706  418823 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 10 07:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Dec 10 07:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 10 07:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Dec 10 07:40 /etc/kubernetes/scheduler.conf
	
	I1210 07:50:37.463780  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:50:37.471472  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:50:37.478782  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.478837  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:50:37.486355  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.493976  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.494040  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:50:37.501640  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:50:37.509588  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 07:50:37.509645  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:50:37.517276  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:50:37.525049  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:37.571686  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.573879  418823 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.002165526s)
	I1210 07:50:39.573940  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.780126  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 07:50:39.857417  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
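Rather than a full `kubeadm init`, the restart path reruns individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the new config. A sketch of that sequencing, using the binary path and phase names from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", p, err)
                return
            }
        }
    }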
	I1210 07:50:39.903067  418823 api_server.go:52] waiting for apiserver process to appear ...
	I1210 07:50:39.903139  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:50:40.403973  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe repeats at ~500ms intervals from 07:50:40 through 07:51:38 (117 log lines elided); no kube-apiserver process is found ...]
	I1210 07:51:39.403315  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
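
The run above is minikube's apiserver health-check poll: roughly every 500 ms the test driver re-issues sudo pgrep -xnf kube-apiserver.*minikube.* over SSH (-x matches the name exactly, -n takes the newest matching process, -f matches against the full command line), and only falls through to log gathering once the deadline passes with no hit. A minimal sketch of that loop, assuming a local run helper in place of minikube's SSH runner (the names here are ours, not minikube's):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // run stands in for minikube's SSH command runner; this sketch shells out
    // locally instead of dialing the node.
    func run(cmd string) error {
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process shows
    // up or the deadline passes, mirroring the ~500 ms cadence in the log.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if run(`sudo pgrep -xnf kube-apiserver.*minikube.*`) == nil {
    			return nil // a matching process exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
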
	I1210 07:51:39.903344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:39.903423  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:39.933715  418823 cri.go:89] found id: ""
	I1210 07:51:39.933730  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.933737  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:39.933741  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:39.933807  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:39.959343  418823 cri.go:89] found id: ""
	I1210 07:51:39.959358  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.959366  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:39.959371  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:39.959428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:39.985280  418823 cri.go:89] found id: ""
	I1210 07:51:39.985294  418823 logs.go:282] 0 containers: []
	W1210 07:51:39.985302  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:39.985307  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:39.985366  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:40.021888  418823 cri.go:89] found id: ""
	I1210 07:51:40.021904  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.021912  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:40.021917  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:40.022019  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:40.050222  418823 cri.go:89] found id: ""
	I1210 07:51:40.050238  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.050245  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:40.050251  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:40.050314  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:40.076513  418823 cri.go:89] found id: ""
	I1210 07:51:40.076528  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.076536  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:40.076541  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:40.076603  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:40.106190  418823 cri.go:89] found id: ""
	I1210 07:51:40.106206  418823 logs.go:282] 0 containers: []
	W1210 07:51:40.106213  418823 logs.go:284] No container was found matching "kindnet"
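
With the process gone, the driver falls back to the container runtime: each control-plane component is looked up by container name via crictl ps -a --quiet --name=<component>, and every query in this section comes back empty, hence the repeated "No container was found matching" warnings. A sketch of that enumeration (the helper names are assumptions, not minikube's cri.go API):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // components lists the names in the same order the log queries them.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet",
    }

    // listContainerIDs returns the IDs crictl reports for a container name;
    // --quiet prints one ID per line, so empty output means zero containers.
    func listContainerIDs(name string) ([]string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo crictl ps -a --quiet --name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range components {
    		if ids, err := listContainerIDs(c); err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    		}
    	}
    }
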
	I1210 07:51:40.106221  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:40.106232  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:40.171760  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:40.171781  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:40.188577  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:40.188594  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:40.259869  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:40.250864   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.251869   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.253556   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.254191   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.255824   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:40.250864   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.251869   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.253556   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.254191   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:40.255824   10976 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:40.259893  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:40.259905  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:40.330751  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:40.330772  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
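
When no component containers exist at all, the driver switches to host-level collection. The five "Gathering logs for ..." steps in the cycle above map one-to-one onto shell commands; the table below reproduces them verbatim from the log. Of the five, only "describe nodes" needs a reachable apiserver, which is why it is the lone step that fails in every cycle.

    // Each "Gathering logs for <name> ..." line corresponds to one entry here;
    // the command strings are copied from the log output above.
    var logSources = []struct{ name, cmd string }{
    	{"kubelet", "sudo journalctl -u kubelet -n 400"},
    	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    	{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
    	{"CRI-O", "sudo journalctl -u crio -n 400"},
    	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    }

The dmesg flags keep the capture small and machine-friendly: -P disables the pager, -H requests human-readable output, -L=never strips color codes, and --level restricts output to warnings and worse.
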
	I1210 07:51:42.864666  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:42.875209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:42.875278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:42.906775  418823 cri.go:89] found id: ""
	I1210 07:51:42.906788  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.906796  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:42.906802  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:42.906860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:42.932120  418823 cri.go:89] found id: ""
	I1210 07:51:42.932134  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.932142  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:42.932147  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:42.932207  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:42.960769  418823 cri.go:89] found id: ""
	I1210 07:51:42.960784  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.960793  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:42.960798  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:42.960857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:42.986269  418823 cri.go:89] found id: ""
	I1210 07:51:42.986285  418823 logs.go:282] 0 containers: []
	W1210 07:51:42.986294  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:42.986299  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:42.986361  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:43.021139  418823 cri.go:89] found id: ""
	I1210 07:51:43.021155  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.021163  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:43.021168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:43.021241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:43.047486  418823 cri.go:89] found id: ""
	I1210 07:51:43.047501  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.047508  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:43.047513  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:43.047576  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:43.073233  418823 cri.go:89] found id: ""
	I1210 07:51:43.073247  418823 logs.go:282] 0 containers: []
	W1210 07:51:43.073255  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:43.073263  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:43.073273  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:43.139078  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:43.139105  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:43.153579  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:43.153595  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:43.240938  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:43.232595   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.233028   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.234681   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.235233   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.236893   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:43.232595   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.233028   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.234681   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.235233   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:43.236893   11080 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:43.240958  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:43.240970  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:43.308772  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:43.308794  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
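
The "describe nodes" failure itself is mechanical rather than mysterious: the on-node kubeconfig points kubectl at https://localhost:8441, and with no kube-apiserver process listening there the TCP connect is refused before TLS or HTTP ever happen, producing the repeated "connect: connection refused" stderr. A direct probe of the port (a standalone sketch, not part of the test driver) reproduces the same condition:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the apiserver port kubectl is configured for. With nothing
    	// listening, this fails with "connect: connection refused".
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on :8441")
    }
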
	I1210 07:51:45.841619  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:45.852276  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:45.852345  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:45.887199  418823 cri.go:89] found id: ""
	I1210 07:51:45.887215  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.887222  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:45.887237  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:45.887324  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:45.918859  418823 cri.go:89] found id: ""
	I1210 07:51:45.918873  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.918880  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:45.918885  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:45.918944  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:45.943991  418823 cri.go:89] found id: ""
	I1210 07:51:45.944006  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.944014  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:45.944019  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:45.944088  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:45.970351  418823 cri.go:89] found id: ""
	I1210 07:51:45.970371  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.970379  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:45.970384  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:45.970444  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:45.995587  418823 cri.go:89] found id: ""
	I1210 07:51:45.995601  418823 logs.go:282] 0 containers: []
	W1210 07:51:45.995609  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:45.995614  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:45.995678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:46.023570  418823 cri.go:89] found id: ""
	I1210 07:51:46.023586  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.023593  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:46.023599  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:46.023660  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:46.056294  418823 cri.go:89] found id: ""
	I1210 07:51:46.056309  418823 logs.go:282] 0 containers: []
	W1210 07:51:46.056317  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:46.056325  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:46.056336  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:46.125021  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:46.125041  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:46.139709  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:46.139728  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:46.233096  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:46.221808   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.223333   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227730   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.229200   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:46.221808   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.223333   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227401   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.227730   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:46.229200   11183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:46.233116  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:46.233127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:46.302440  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:46.302460  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:48.833091  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:48.843740  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:48.843804  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:48.869041  418823 cri.go:89] found id: ""
	I1210 07:51:48.869057  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.869064  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:48.869070  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:48.869139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:48.893750  418823 cri.go:89] found id: ""
	I1210 07:51:48.893765  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.893784  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:48.893790  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:48.893850  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:48.919315  418823 cri.go:89] found id: ""
	I1210 07:51:48.919330  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.919337  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:48.919343  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:48.919413  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:48.944091  418823 cri.go:89] found id: ""
	I1210 07:51:48.944107  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.944114  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:48.944120  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:48.944178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:48.968980  418823 cri.go:89] found id: ""
	I1210 07:51:48.968995  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.969002  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:48.969007  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:48.969066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:48.994258  418823 cri.go:89] found id: ""
	I1210 07:51:48.994272  418823 logs.go:282] 0 containers: []
	W1210 07:51:48.994279  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:48.994294  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:48.994354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:49.021988  418823 cri.go:89] found id: ""
	I1210 07:51:49.022004  418823 logs.go:282] 0 containers: []
	W1210 07:51:49.022012  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:49.022019  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:49.022029  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:49.089579  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:49.089605  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:49.118629  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:49.118648  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:49.191180  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:49.191204  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:49.208309  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:49.208325  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:49.273461  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:49.264556   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.265266   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.266981   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.267634   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.269070   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:49.264556   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.265266   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.266981   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.267634   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:49.269070   11307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
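
Stepping back from any single cycle, the timestamps give the outer shape of the retry: one pgrep probe, one full gather-and-dump pass, then roughly a three-second pause before the next probe (07:51:42, :45, :48, :51, ...). Expressed as a deadline loop (the interval and timeout here are read off the log, not minikube constants):

    package main

    import (
    	"fmt"
    	"time"
    )

    // retryUntilHealthy runs checkOnce (standing in for one probe-plus-gather
    // cycle from the log) until it succeeds or the overall deadline passes.
    func retryUntilHealthy(checkOnce func() bool, interval, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if checkOnce() {
    			return true
    		}
    		time.Sleep(interval) // ~3 s between the cycles above
    	}
    	return false
    }

    func main() {
    	ok := retryUntilHealthy(func() bool { return false }, 3*time.Second, 12*time.Second)
    	fmt.Println("healthy:", ok)
    }
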
	I1210 07:51:51.775168  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:51.785506  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:51.785567  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:51.810828  418823 cri.go:89] found id: ""
	I1210 07:51:51.810843  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.810860  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:51.810865  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:51.810926  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:51.835270  418823 cri.go:89] found id: ""
	I1210 07:51:51.835285  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.835292  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:51.835297  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:51.835357  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:51.862106  418823 cri.go:89] found id: ""
	I1210 07:51:51.862121  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.862129  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:51.862134  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:51.862203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:51.887726  418823 cri.go:89] found id: ""
	I1210 07:51:51.887741  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.887749  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:51.887754  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:51.887816  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:51.916383  418823 cri.go:89] found id: ""
	I1210 07:51:51.916398  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.916405  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:51.916409  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:51.916479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:51.945251  418823 cri.go:89] found id: ""
	I1210 07:51:51.945266  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.945273  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:51.945278  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:51.945337  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:51.970333  418823 cri.go:89] found id: ""
	I1210 07:51:51.970348  418823 logs.go:282] 0 containers: []
	W1210 07:51:51.970357  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:51.970365  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:51.970385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:51.998969  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:51.998986  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:52.071390  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:52.071420  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:52.087389  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:52.087406  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:52.154961  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:52.145998   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.146914   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.148561   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.149089   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.150792   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:52.145998   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.146914   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.148561   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.149089   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:52.150792   11403 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:52.154973  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:52.154985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
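
One detail worth noticing across cycles: the gather order keeps shuffling (kubelet first at 07:51:43, CRI-O first at 07:51:49, container status first at 07:51:51, CRI-O again at 07:51:54). That shuffle is exactly what iterating a Go map looks like, since Go deliberately randomizes map iteration order; a plausible reading (an assumption on our part, not confirmed against minikube's source) is that the log sources are stored in a map keyed by name:

    package main

    import "fmt"

    func main() {
    	// Ranging over a Go map yields a different key order on every run,
    	// matching the shifting "Gathering logs for ..." order between cycles.
    	// Command strings are abbreviated here; see the full table earlier.
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg ... | tail -n 400",
    		"describe nodes":   "sudo .../kubectl describe nodes ...",
    		"CRI-O":            "sudo journalctl -u crio -n 400",
    		"container status": "sudo crictl ps -a || sudo docker ps -a",
    	}
    	for name := range sources {
    		fmt.Printf("Gathering logs for %s ...\n", name)
    	}
    }
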
	I1210 07:51:54.734714  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:54.745090  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:54.745151  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:54.770064  418823 cri.go:89] found id: ""
	I1210 07:51:54.770079  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.770086  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:54.770091  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:54.770149  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:54.796152  418823 cri.go:89] found id: ""
	I1210 07:51:54.796167  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.796174  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:54.796179  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:54.796241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:54.822080  418823 cri.go:89] found id: ""
	I1210 07:51:54.822095  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.822102  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:54.822107  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:54.822175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:54.849868  418823 cri.go:89] found id: ""
	I1210 07:51:54.849883  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.849891  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:54.849895  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:54.849951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:54.875726  418823 cri.go:89] found id: ""
	I1210 07:51:54.875741  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.875748  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:54.875753  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:54.875815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:54.905509  418823 cri.go:89] found id: ""
	I1210 07:51:54.905524  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.905531  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:54.905536  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:54.905595  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:54.931115  418823 cri.go:89] found id: ""
	I1210 07:51:54.931138  418823 logs.go:282] 0 containers: []
	W1210 07:51:54.931146  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:54.931154  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:54.931164  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:54.997885  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:54.997906  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:51:55.030067  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:55.030094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:55.099098  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:55.099116  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:55.113912  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:55.113934  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:55.200955  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:55.191641   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.192478   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.194192   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.195162   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.196812   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:55.191641   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.192478   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.194192   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.195162   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:55.196812   11511 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:57.701770  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:51:57.712296  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:51:57.712359  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:51:57.742200  418823 cri.go:89] found id: ""
	I1210 07:51:57.742217  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.742225  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:51:57.742230  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:51:57.742288  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:51:57.770042  418823 cri.go:89] found id: ""
	I1210 07:51:57.770056  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.770063  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:51:57.770068  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:51:57.770126  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:51:57.795451  418823 cri.go:89] found id: ""
	I1210 07:51:57.795464  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.795471  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:51:57.795477  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:51:57.795536  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:51:57.823068  418823 cri.go:89] found id: ""
	I1210 07:51:57.823084  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.823091  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:51:57.823097  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:51:57.823160  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:51:57.849968  418823 cri.go:89] found id: ""
	I1210 07:51:57.849982  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.849998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:51:57.850003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:51:57.850064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:51:57.877868  418823 cri.go:89] found id: ""
	I1210 07:51:57.877881  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.877889  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:51:57.877894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:51:57.877954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:51:57.903803  418823 cri.go:89] found id: ""
	I1210 07:51:57.903823  418823 logs.go:282] 0 containers: []
	W1210 07:51:57.903830  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:51:57.903838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:51:57.903849  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:51:57.970812  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:51:57.970831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:51:57.985765  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:51:57.985786  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:51:58.070052  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:51:58.061598   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.062333   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.063943   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.064517   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.066053   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:51:58.061598   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.062333   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.063943   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.064517   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:51:58.066053   11606 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:51:58.070062  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:51:58.070076  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:51:58.138971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:51:58.138993  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
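
The "container status" command is itself a small portability trick: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a first resolves crictl's absolute path when which can find it (falling back to the bare name, since sudo's PATH can differ from the caller's), and if the crictl invocation fails outright the shell retries with docker. Running the same compound command from Go, purely for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// bash resolves `which crictl || echo crictl` first, runs crictl ps -a,
    	// and only falls back to docker ps -a if that exits non-zero.
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("both crictl and docker failed:", err)
    	}
    	fmt.Print(string(out))
    }
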
	I1210 07:52:00.678904  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:00.689904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:00.689965  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:00.717867  418823 cri.go:89] found id: ""
	I1210 07:52:00.717882  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.717889  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:00.717895  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:00.717960  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:00.746728  418823 cri.go:89] found id: ""
	I1210 07:52:00.746743  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.746750  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:00.746755  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:00.746815  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:00.771995  418823 cri.go:89] found id: ""
	I1210 07:52:00.772009  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.772016  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:00.772021  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:00.772084  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:00.801311  418823 cri.go:89] found id: ""
	I1210 07:52:00.801326  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.801333  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:00.801338  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:00.801400  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:00.827977  418823 cri.go:89] found id: ""
	I1210 07:52:00.827992  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.827999  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:00.828004  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:00.828064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:00.857640  418823 cri.go:89] found id: ""
	I1210 07:52:00.857653  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.857661  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:00.857666  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:00.857723  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:00.886162  418823 cri.go:89] found id: ""
	I1210 07:52:00.886176  418823 logs.go:282] 0 containers: []
	W1210 07:52:00.886183  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:00.886192  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:00.886203  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:00.900682  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:00.900699  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:00.962996  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:00.954850   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.955457   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957002   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957542   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.959068   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:00.954850   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.955457   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957002   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.957542   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:00.959068   11710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:00.963006  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:00.963044  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:01.030923  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:01.030945  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:01.064661  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:01.064678  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:03.634114  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:03.644373  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:03.644437  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:03.670228  418823 cri.go:89] found id: ""
	I1210 07:52:03.670242  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.670250  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:03.670255  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:03.670313  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:03.697715  418823 cri.go:89] found id: ""
	I1210 07:52:03.697730  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.697737  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:03.697742  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:03.697800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:03.725317  418823 cri.go:89] found id: ""
	I1210 07:52:03.725331  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.725338  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:03.725344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:03.725406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:03.754932  418823 cri.go:89] found id: ""
	I1210 07:52:03.754947  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.754954  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:03.754959  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:03.755055  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:03.781710  418823 cri.go:89] found id: ""
	I1210 07:52:03.781724  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.781731  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:03.781736  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:03.781799  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:03.806748  418823 cri.go:89] found id: ""
	I1210 07:52:03.806761  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.806769  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:03.806773  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:03.806839  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:03.831941  418823 cri.go:89] found id: ""
	I1210 07:52:03.831956  418823 logs.go:282] 0 containers: []
	W1210 07:52:03.831963  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:03.831970  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:03.831980  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:03.893889  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:03.885770   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.886207   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.887763   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.888077   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:03.889552   11811 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:03.893899  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:03.893910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:03.963740  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:03.963762  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:03.994617  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:03.994633  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:04.064848  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:04.064869  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
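Each pass of the gather loop above pulls the same four sources with the same commands minikube runs over SSH, so the whole collection can be reproduced by hand in one shot. A sketch, assuming a shell on the node (the output file names are illustrative):

  # Same sources the log gatherer pulls, collected once
  sudo journalctl -u kubelet -n 400 --no-pager > kubelet.log
  sudo journalctl -u crio -n 400 --no-pager > crio.log
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
  sudo $(which crictl || echo crictl) ps -a > containers.log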
	I1210 07:52:06.580763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:06.590814  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:06.590876  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:06.617862  418823 cri.go:89] found id: ""
	I1210 07:52:06.617877  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.617884  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:06.617889  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:06.617952  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:06.642344  418823 cri.go:89] found id: ""
	I1210 07:52:06.642364  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.642372  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:06.642376  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:06.642434  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:06.668168  418823 cri.go:89] found id: ""
	I1210 07:52:06.668181  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.668189  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:06.668194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:06.668252  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:06.693569  418823 cri.go:89] found id: ""
	I1210 07:52:06.693584  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.693591  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:06.693596  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:06.693655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:06.719248  418823 cri.go:89] found id: ""
	I1210 07:52:06.719272  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.719281  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:06.719286  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:06.719353  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:06.744269  418823 cri.go:89] found id: ""
	I1210 07:52:06.744298  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.744306  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:06.744311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:06.744384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:06.769456  418823 cri.go:89] found id: ""
	I1210 07:52:06.769485  418823 logs.go:282] 0 containers: []
	W1210 07:52:06.769493  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:06.769501  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:06.769520  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:06.835122  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:06.826477   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.827191   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.828919   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.829490   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:06.831268   11916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:06.835134  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:06.835145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:06.903874  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:06.903896  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:06.932245  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:06.932261  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:06.999686  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:06.999707  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:09.516631  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:09.527151  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:09.527214  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:09.553162  418823 cri.go:89] found id: ""
	I1210 07:52:09.553175  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.553182  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:09.553187  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:09.553248  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:09.577770  418823 cri.go:89] found id: ""
	I1210 07:52:09.577785  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.577792  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:09.577797  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:09.577857  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:09.603741  418823 cri.go:89] found id: ""
	I1210 07:52:09.603755  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.603765  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:09.603770  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:09.603830  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:09.631507  418823 cri.go:89] found id: ""
	I1210 07:52:09.631521  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.631529  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:09.631534  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:09.631597  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:09.657315  418823 cri.go:89] found id: ""
	I1210 07:52:09.657329  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.657342  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:09.657347  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:09.657406  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:09.682591  418823 cri.go:89] found id: ""
	I1210 07:52:09.682606  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.682613  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:09.682619  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:09.682677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:09.708020  418823 cri.go:89] found id: ""
	I1210 07:52:09.708034  418823 logs.go:282] 0 containers: []
	W1210 07:52:09.708042  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:09.708049  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:09.708062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:09.777964  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:09.777985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:09.792349  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:09.792367  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:09.854411  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:09.846045   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.846788   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848379   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.848685   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:09.850192   12026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:09.854421  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:09.854434  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:09.922233  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:09.922255  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
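The timestamps show the probe repeating on a roughly three-second cadence: pgrep for a live apiserver process, then a crictl sweep over each control-plane component. A hedged bash sketch of an equivalent wait loop (the interval and process pattern are read off the log; the 120 s timeout is illustrative, not minikube's):

  # Poll until an apiserver process appears, or give up.
  deadline=$((SECONDS + 120))   # illustrative timeout
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    if (( SECONDS >= deadline )); then
      echo "apiserver never came up" >&2
      exit 1
    fi
    sleep 3
  done
  echo "apiserver is running"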
	I1210 07:52:12.457145  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:12.468643  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:12.468721  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:12.494760  418823 cri.go:89] found id: ""
	I1210 07:52:12.494774  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.494782  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:12.494787  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:12.494853  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:12.520639  418823 cri.go:89] found id: ""
	I1210 07:52:12.520653  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.520673  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:12.520678  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:12.520738  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:12.546812  418823 cri.go:89] found id: ""
	I1210 07:52:12.546827  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.546834  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:12.546839  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:12.546899  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:12.573531  418823 cri.go:89] found id: ""
	I1210 07:52:12.573546  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.573553  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:12.573558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:12.573623  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:12.600389  418823 cri.go:89] found id: ""
	I1210 07:52:12.600403  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.600411  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:12.600416  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:12.600475  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:12.630232  418823 cri.go:89] found id: ""
	I1210 07:52:12.630257  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.630265  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:12.630271  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:12.630340  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:12.656013  418823 cri.go:89] found id: ""
	I1210 07:52:12.656027  418823 logs.go:282] 0 containers: []
	W1210 07:52:12.656035  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:12.656042  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:12.656058  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:12.727638  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:12.727667  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:12.742877  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:12.742895  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:12.807790  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:12.799348   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.800103   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.801856   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.802374   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:12.803909   12127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:12.807802  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:12.807814  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:12.876103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:12.876124  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:15.409499  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:15.424003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:15.424080  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:15.458307  418823 cri.go:89] found id: ""
	I1210 07:52:15.458341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.458348  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:15.458353  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:15.458428  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:15.488619  418823 cri.go:89] found id: ""
	I1210 07:52:15.488634  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.488641  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:15.488646  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:15.488709  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:15.513795  418823 cri.go:89] found id: ""
	I1210 07:52:15.513809  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.513817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:15.513831  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:15.513888  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:15.539219  418823 cri.go:89] found id: ""
	I1210 07:52:15.539233  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.539240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:15.539245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:15.539305  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:15.565461  418823 cri.go:89] found id: ""
	I1210 07:52:15.565475  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.565490  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:15.565495  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:15.565554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:15.597327  418823 cri.go:89] found id: ""
	I1210 07:52:15.597341  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.597348  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:15.597354  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:15.597412  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:15.622974  418823 cri.go:89] found id: ""
	I1210 07:52:15.622994  418823 logs.go:282] 0 containers: []
	W1210 07:52:15.623001  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:15.623047  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:15.623059  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:15.690204  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:15.681087   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.681887   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.683667   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.684195   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:15.685805   12227 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:15.690215  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:15.690226  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:15.758230  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:15.758252  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:15.788867  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:15.788884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:15.856134  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:15.856154  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
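Every crictl query in this loop returns an empty ID list, so the failure predates container creation: no control-plane sandbox was ever started. A short sketch for inspecting the runtime state directly; all three are standard crictl subcommands:

  sudo crictl pods              # any pod sandboxes at all?
  sudo crictl ps -a             # containers in every state, including exited
  sudo crictl info | head -n 40 # runtime and CNI status as CRI-O reports it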
	I1210 07:52:18.371925  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:18.382408  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:18.382482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:18.408893  418823 cri.go:89] found id: ""
	I1210 07:52:18.408907  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.408914  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:18.408919  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:18.408994  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:18.444341  418823 cri.go:89] found id: ""
	I1210 07:52:18.444355  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.444374  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:18.444380  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:18.444450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:18.476809  418823 cri.go:89] found id: ""
	I1210 07:52:18.476823  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.476830  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:18.476835  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:18.476892  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:18.503052  418823 cri.go:89] found id: ""
	I1210 07:52:18.503066  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.503073  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:18.503078  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:18.503150  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:18.529967  418823 cri.go:89] found id: ""
	I1210 07:52:18.529981  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.529998  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:18.530003  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:18.530095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:18.555604  418823 cri.go:89] found id: ""
	I1210 07:52:18.555619  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.555626  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:18.555631  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:18.555692  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:18.580758  418823 cri.go:89] found id: ""
	I1210 07:52:18.580773  418823 logs.go:282] 0 containers: []
	W1210 07:52:18.580781  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:18.580789  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:18.580803  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:18.649536  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:18.641471   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.642112   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.643861   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.644377   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:18.645509   12336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:18.649546  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:18.649558  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:18.720152  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:18.720174  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:18.749804  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:18.749823  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:18.819943  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:18.819965  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:21.337138  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:21.347127  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:21.347189  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:21.373895  418823 cri.go:89] found id: ""
	I1210 07:52:21.373918  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.373926  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:21.373931  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:21.373998  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:21.399869  418823 cri.go:89] found id: ""
	I1210 07:52:21.399896  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.399903  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:21.399908  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:21.399979  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:21.427202  418823 cri.go:89] found id: ""
	I1210 07:52:21.427219  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.427226  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:21.427231  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:21.427299  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:21.458325  418823 cri.go:89] found id: ""
	I1210 07:52:21.458348  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.458355  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:21.458360  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:21.458429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:21.488232  418823 cri.go:89] found id: ""
	I1210 07:52:21.488246  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.488253  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:21.488259  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:21.488318  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:21.523678  418823 cri.go:89] found id: ""
	I1210 07:52:21.523693  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.523700  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:21.523706  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:21.523774  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:21.554053  418823 cri.go:89] found id: ""
	I1210 07:52:21.554068  418823 logs.go:282] 0 containers: []
	W1210 07:52:21.554076  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:21.554084  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:21.554094  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:21.584626  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:21.584643  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:21.650495  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:21.650516  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:21.665376  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:21.665393  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:21.728186  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:21.719729   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.720322   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722158   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.722717   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:21.724385   12456 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:21.728197  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:21.728210  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
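With no containers ever created, the cause has to be in the kubelet or CRI-O journals the loop keeps collecting. A sketch for narrowing those 400-line dumps down to the relevant entries; the grep pattern is a guess at the usual suspects, not something taken from this report:

  sudo journalctl -u kubelet -n 400 --no-pager \
    | grep -iE 'kube-apiserver|static pod|sandbox|fail|error'
  sudo journalctl -u crio -n 400 --no-pager \
    | grep -iE 'kube-apiserver|sandbox|error'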
	I1210 07:52:24.296826  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:24.306876  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:24.306941  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:24.331566  418823 cri.go:89] found id: ""
	I1210 07:52:24.331580  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.331587  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:24.331592  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:24.331654  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:24.364290  418823 cri.go:89] found id: ""
	I1210 07:52:24.364304  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.364312  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:24.364317  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:24.364375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:24.394840  418823 cri.go:89] found id: ""
	I1210 07:52:24.394855  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.394863  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:24.394871  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:24.394927  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:24.423155  418823 cri.go:89] found id: ""
	I1210 07:52:24.423169  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.423176  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:24.423181  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:24.423237  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:24.448495  418823 cri.go:89] found id: ""
	I1210 07:52:24.448509  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.448517  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:24.448522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:24.448582  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:24.473213  418823 cri.go:89] found id: ""
	I1210 07:52:24.473228  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.473244  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:24.473250  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:24.473311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:24.498332  418823 cri.go:89] found id: ""
	I1210 07:52:24.498346  418823 logs.go:282] 0 containers: []
	W1210 07:52:24.498363  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:24.498371  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:24.498386  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:24.512582  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:24.512599  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:24.576630  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:24.568982   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.569512   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.570996   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.571433   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:24.572843   12547 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:52:24.576640  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:24.576651  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:24.643309  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:24.643329  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:24.671954  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:24.671973  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:27.241302  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:27.251489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:27.251554  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:27.276224  418823 cri.go:89] found id: ""
	I1210 07:52:27.276239  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.276247  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:27.276252  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:27.276315  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:27.302841  418823 cri.go:89] found id: ""
	I1210 07:52:27.302855  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.302862  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:27.302867  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:27.302934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:27.329134  418823 cri.go:89] found id: ""
	I1210 07:52:27.329148  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.329155  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:27.329160  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:27.329217  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:27.355218  418823 cri.go:89] found id: ""
	I1210 07:52:27.355233  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.355240  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:27.355245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:27.355310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:27.380928  418823 cri.go:89] found id: ""
	I1210 07:52:27.380942  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.380948  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:27.380953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:27.381016  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:27.405139  418823 cri.go:89] found id: ""
	I1210 07:52:27.405153  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.405160  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:27.405165  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:27.405224  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:27.434261  418823 cri.go:89] found id: ""
	I1210 07:52:27.434274  418823 logs.go:282] 0 containers: []
	W1210 07:52:27.434281  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:27.434288  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:27.434308  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:27.512344  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:27.512364  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:27.526600  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:27.526616  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:27.593338  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:27.585550   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.586220   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.587805   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.588181   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.589629   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:27.585550   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.586220   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.587805   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.588181   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:27.589629   12654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:27.593348  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:27.593360  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:27.660306  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:27.660330  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
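The pattern above is minikube's apiserver wait loop: roughly every 3 seconds it probes for a kube-apiserver process and container, and when neither exists it gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs before retrying. A minimal sketch of the same probes run by hand, assuming the affected profile is the current one and reachable via "minikube ssh" (the commands mirror those logged above; nothing here is taken from outside the log):

# Hand reproduction of the probes in the log above (illustrative):
minikube ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"      # exits non-zero: no apiserver process
minikube ssh "sudo crictl ps -a --quiet --name=kube-apiserver"   # prints nothing: no container was ever created
minikube ssh "sudo journalctl -u kubelet -n 400"                 # why kubelet never started the static pods
minikube ssh "sudo journalctl -u crio -n 400"                    # runtime-side view of the same window

Because crictl returns no container IDs at all (not even exited ones), the failure is upstream of the container runtime: kubelet never created the static control-plane pods.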
	[... same probe-and-gather cycle repeats every ~3s at 07:52:30, 07:52:33, 07:52:36, 07:52:39, 07:52:42, 07:52:45 and 07:52:48; each pass finds no process or container for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager or kindnet, gathers the kubelet, dmesg, describe-nodes, CRI-O and container-status logs, and "kubectl describe nodes" fails each time with "dial tcp [::1]:8441: connect: connection refused" ...]
	I1210 07:52:51.036528  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:51.046592  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:51.046665  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:51.073731  418823 cri.go:89] found id: ""
	I1210 07:52:51.073746  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.073753  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:51.073759  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:51.073819  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:51.100005  418823 cri.go:89] found id: ""
	I1210 07:52:51.100019  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.100027  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:51.100031  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:51.100095  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:51.125872  418823 cri.go:89] found id: ""
	I1210 07:52:51.125897  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.125905  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:51.125910  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:51.125970  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:51.151761  418823 cri.go:89] found id: ""
	I1210 07:52:51.151775  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.151783  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:51.151788  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:51.151846  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:51.178046  418823 cri.go:89] found id: ""
	I1210 07:52:51.178060  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.178068  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:51.178074  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:51.178143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:51.205729  418823 cri.go:89] found id: ""
	I1210 07:52:51.205743  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.205750  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:51.205756  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:51.205813  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:51.231485  418823 cri.go:89] found id: ""
	I1210 07:52:51.231498  418823 logs.go:282] 0 containers: []
	W1210 07:52:51.231505  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:51.231512  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:51.231522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:51.295749  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:51.295769  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:51.310814  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:51.310832  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:51.374238  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:51.365806   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.366220   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.367678   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.368628   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.370186   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:51.365806   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.366220   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.367678   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.368628   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:51.370186   13493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:51.374248  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:51.374260  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:51.442190  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:51.442209  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
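The pass above is one complete iteration of minikube's control-plane health check: it probes for a kube-apiserver process with pgrep, lists CRI containers for each expected component, and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal sketch that reproduces the same checks by hand, assuming shell access to the node (for example via minikube ssh; the profile name is whatever the test created):

    # Reproduce the per-component container check logged above.
    # Run inside the minikube node (e.g. after `minikube ssh`).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "no container matching \"$c\""
    done
    # Then pull the same logs minikube gathers:
    sudo journalctl -u kubelet -n 400 > /tmp/kubelet.log
    sudo journalctl -u crio -n 400 > /tmp/crio.log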
	I1210 07:52:53.979674  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:53.989805  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:53.989873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:54.022480  418823 cri.go:89] found id: ""
	I1210 07:52:54.022494  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.022501  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:54.022507  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:54.022571  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:54.049837  418823 cri.go:89] found id: ""
	I1210 07:52:54.049851  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.049858  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:54.049864  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:54.049924  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:54.079149  418823 cri.go:89] found id: ""
	I1210 07:52:54.079164  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.079172  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:54.079177  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:54.079244  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:54.110317  418823 cri.go:89] found id: ""
	I1210 07:52:54.110332  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.110339  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:54.110344  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:54.110401  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:54.137776  418823 cri.go:89] found id: ""
	I1210 07:52:54.137798  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.137806  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:54.137812  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:54.137873  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:54.162601  418823 cri.go:89] found id: ""
	I1210 07:52:54.162615  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.162622  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:54.162629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:54.162690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:54.188677  418823 cri.go:89] found id: ""
	I1210 07:52:54.188691  418823 logs.go:282] 0 containers: []
	W1210 07:52:54.188698  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:54.188706  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:54.188720  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:54.255918  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:54.255940  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:54.270493  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:54.270513  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:54.347104  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:54.330795   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.331491   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333174   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333794   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.335404   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:54.330795   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.331491   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333174   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.333794   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:54.335404   13598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:54.347114  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:54.347127  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:54.415651  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:54.415676  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
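Every describe-nodes attempt fails the same way: the node-side kubectl dials https://localhost:8441 and gets connection refused, which is consistent with the empty crictl listings above, since no kube-apiserver container exists and nothing is listening on the port. A quick way to confirm that from the node (hedged sketch; assumes ss and curl are present in the node image):

    # Is anything bound to the apiserver port?
    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"
    # Does the apiserver answer a health probe?
    curl -ksf https://localhost:8441/healthz || echo "apiserver not responding"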
	I1210 07:52:56.950504  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:56.960908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:56.960974  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:56.986942  418823 cri.go:89] found id: ""
	I1210 07:52:56.986957  418823 logs.go:282] 0 containers: []
	W1210 07:52:56.986964  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:56.986969  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:56.987046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:57.014060  418823 cri.go:89] found id: ""
	I1210 07:52:57.014088  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.014095  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:57.014100  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:57.014192  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:57.040046  418823 cri.go:89] found id: ""
	I1210 07:52:57.040061  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.040069  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:57.040075  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:57.040139  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:52:57.065400  418823 cri.go:89] found id: ""
	I1210 07:52:57.065427  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.065435  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:52:57.065441  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:52:57.065511  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:52:57.094105  418823 cri.go:89] found id: ""
	I1210 07:52:57.094127  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.094135  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:52:57.094140  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:52:57.094203  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:52:57.120409  418823 cri.go:89] found id: ""
	I1210 07:52:57.120425  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.120432  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:52:57.120438  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:52:57.120498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:52:57.146119  418823 cri.go:89] found id: ""
	I1210 07:52:57.146134  418823 logs.go:282] 0 containers: []
	W1210 07:52:57.146142  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:52:57.146150  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:52:57.146160  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:52:57.160510  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:52:57.160526  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:52:57.225221  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:52:57.216448   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.217062   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.218858   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.219548   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.221235   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:52:57.216448   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.217062   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.218858   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.219548   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:52:57.221235   13705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:52:57.225232  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:52:57.225253  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:52:57.293765  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:52:57.293785  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:52:57.326044  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:52:57.326061  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:52:59.896294  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:52:59.906460  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:52:59.906522  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:52:59.930908  418823 cri.go:89] found id: ""
	I1210 07:52:59.930922  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.930930  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:52:59.930935  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:52:59.930999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:52:59.956028  418823 cri.go:89] found id: ""
	I1210 07:52:59.956042  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.956049  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:52:59.956054  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:52:59.956120  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:52:59.981032  418823 cri.go:89] found id: ""
	I1210 07:52:59.981046  418823 logs.go:282] 0 containers: []
	W1210 07:52:59.981053  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:52:59.981058  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:52:59.981116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:00.027952  418823 cri.go:89] found id: ""
	I1210 07:53:00.027967  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.027975  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:00.027981  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:00.028053  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:00.149242  418823 cri.go:89] found id: ""
	I1210 07:53:00.149275  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.149301  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:00.149308  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:00.149381  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:00.205658  418823 cri.go:89] found id: ""
	I1210 07:53:00.205676  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.205684  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:00.205691  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:00.205842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:00.272868  418823 cri.go:89] found id: ""
	I1210 07:53:00.272884  418823 logs.go:282] 0 containers: []
	W1210 07:53:00.272892  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:00.272901  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:00.272914  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:00.364734  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:00.349008   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.349679   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.351763   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.352235   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.354014   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:00.349008   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.349679   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.351763   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.352235   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:00.354014   13803 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:00.364745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:00.364757  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:00.441561  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:00.441581  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:00.486703  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:00.486722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:00.551636  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:00.551658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
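The container-status step uses a small shell fallback chain: resolve crictl's path with which (falling back to the bare name so sudo still searches root's PATH), and if the crictl listing fails entirely, fall back to docker. Written out, the one-liner logged above is equivalent to:

    # Expansion of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    CRICTL=$(which crictl || echo crictl)
    sudo "$CRICTL" ps -a || sudo docker ps -a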
	I1210 07:53:03.068015  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:03.078410  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:03.078481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:03.103362  418823 cri.go:89] found id: ""
	I1210 07:53:03.103378  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.103385  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:03.103391  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:03.103451  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:03.129650  418823 cri.go:89] found id: ""
	I1210 07:53:03.129668  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.129676  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:03.129681  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:03.129753  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:03.156057  418823 cri.go:89] found id: ""
	I1210 07:53:03.156072  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.156079  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:03.156085  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:03.156143  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:03.181869  418823 cri.go:89] found id: ""
	I1210 07:53:03.181895  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.181903  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:03.181908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:03.181976  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:03.210043  418823 cri.go:89] found id: ""
	I1210 07:53:03.210056  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.210064  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:03.210069  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:03.210148  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:03.234991  418823 cri.go:89] found id: ""
	I1210 07:53:03.235006  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.235046  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:03.235051  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:03.235119  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:03.261578  418823 cri.go:89] found id: ""
	I1210 07:53:03.261605  418823 logs.go:282] 0 containers: []
	W1210 07:53:03.261612  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:03.261620  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:03.261630  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:03.326335  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:03.326355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:03.340836  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:03.340853  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:03.407609  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:03.398750   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.399755   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401402   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401872   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.403636   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:03.398750   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.399755   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401402   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.401872   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:03.403636   13915 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:03.407623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:03.407637  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:03.494941  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:03.494964  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.031492  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:06.042260  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:06.042330  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:06.069383  418823 cri.go:89] found id: ""
	I1210 07:53:06.069398  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.069405  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:06.069410  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:06.069471  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:06.095692  418823 cri.go:89] found id: ""
	I1210 07:53:06.095706  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.095713  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:06.095718  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:06.095783  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:06.122565  418823 cri.go:89] found id: ""
	I1210 07:53:06.122579  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.122585  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:06.122590  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:06.122647  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:06.147461  418823 cri.go:89] found id: ""
	I1210 07:53:06.147476  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.147483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:06.147489  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:06.147549  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:06.172221  418823 cri.go:89] found id: ""
	I1210 07:53:06.172235  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.172243  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:06.172248  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:06.172306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:06.200403  418823 cri.go:89] found id: ""
	I1210 07:53:06.200417  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.200424  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:06.200429  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:06.200487  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:06.224557  418823 cri.go:89] found id: ""
	I1210 07:53:06.224572  418823 logs.go:282] 0 containers: []
	W1210 07:53:06.224578  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:06.224586  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:06.224597  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:06.285061  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:06.277547   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.278040   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279487   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279882   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.281310   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:06.277547   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.278040   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279487   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.279882   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:06.281310   14017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:06.285071  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:06.285082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:06.351298  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:06.351317  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:06.379592  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:06.379609  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:06.448278  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:06.448298  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
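The dmesg invocation repeated in each pass restricts kernel messages to warning severity and above. Annotated (util-linux dmesg; flag meanings per its man page):

    sudo dmesg -P -H -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # -P            do not pipe the output through a pager
    # -H            human-readable timestamps
    # -L=never      disable colour escape codes
    # --level ...   keep only warning-and-worse messages
    # tail -n 400   cap the sample at the last 400 lines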
	I1210 07:53:08.966418  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:08.976886  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:08.976953  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:09.010205  418823 cri.go:89] found id: ""
	I1210 07:53:09.010221  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.010248  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:09.010253  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:09.010336  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:09.039128  418823 cri.go:89] found id: ""
	I1210 07:53:09.039143  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.039150  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:09.039155  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:09.039225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:09.066093  418823 cri.go:89] found id: ""
	I1210 07:53:09.066108  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.066116  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:09.066121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:09.066218  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:09.091920  418823 cri.go:89] found id: ""
	I1210 07:53:09.091934  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.091948  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:09.091953  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:09.092014  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:09.118286  418823 cri.go:89] found id: ""
	I1210 07:53:09.118301  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.118309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:09.118314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:09.118374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:09.143614  418823 cri.go:89] found id: ""
	I1210 07:53:09.143628  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.143635  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:09.143641  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:09.143705  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:09.168425  418823 cri.go:89] found id: ""
	I1210 07:53:09.168440  418823 logs.go:282] 0 containers: []
	W1210 07:53:09.168447  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:09.168455  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:09.168465  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:09.236920  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:09.236943  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:09.269085  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:09.269103  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:09.339867  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:09.339886  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:09.354523  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:09.354541  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:09.432066  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:09.420805   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.421538   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.423585   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.424589   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.425304   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:09.420805   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.421538   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.423585   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.424589   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:09.425304   14139 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
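The pgrep probes recur at roughly three-second intervals (07:53:06.0, 07:53:09.0, 07:53:11.9, ...) while minikube waits for the apiserver to come up. Purely as an illustration of that cadence, not minikube's actual implementation, an equivalent shell poll would be:

    # Illustrative ~3s health poll against the apiserver port (hypothetical).
    until curl -ksf https://localhost:8441/healthz >/dev/null; do
      echo "apiserver not ready; retrying in 3s"
      sleep 3
    done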
	I1210 07:53:11.933763  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:11.943879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:11.943943  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:11.969555  418823 cri.go:89] found id: ""
	I1210 07:53:11.969578  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.969586  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:11.969591  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:11.969663  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:11.997107  418823 cri.go:89] found id: ""
	I1210 07:53:11.997121  418823 logs.go:282] 0 containers: []
	W1210 07:53:11.997128  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:11.997133  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:11.997198  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:12.025616  418823 cri.go:89] found id: ""
	I1210 07:53:12.025630  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.025638  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:12.025644  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:12.025712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:12.052893  418823 cri.go:89] found id: ""
	I1210 07:53:12.052906  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.052914  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:12.052919  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:12.052983  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:12.077956  418823 cri.go:89] found id: ""
	I1210 07:53:12.077979  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.077988  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:12.077993  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:12.078064  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:12.104169  418823 cri.go:89] found id: ""
	I1210 07:53:12.104183  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.104200  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:12.104207  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:12.104278  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:12.130790  418823 cri.go:89] found id: ""
	I1210 07:53:12.130804  418823 logs.go:282] 0 containers: []
	W1210 07:53:12.130812  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:12.130819  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:12.130831  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:12.194759  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:12.194778  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:12.209969  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:12.209985  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:12.272708  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:12.265266   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.265649   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267247   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267585   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.269013   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:12.265266   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.265649   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267247   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.267585   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:12.269013   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:12.272718  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:12.272730  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:12.339739  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:12.339759  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
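The probe that opens every pass, sudo pgrep -xnf kube-apiserver.*minikube.*, checks whether an apiserver process is running at all before any container listing. Annotated (procps pgrep):

    sudo pgrep -x -n -f 'kube-apiserver.*minikube.*'
    # -f  match the pattern against the full command line
    # -x  require the pattern to match that command line exactly
    # -n  print only the newest matching process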
	I1210 07:53:14.870834  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:14.882996  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:14.883096  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:14.912032  418823 cri.go:89] found id: ""
	I1210 07:53:14.912046  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.912053  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:14.912059  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:14.912116  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:14.937034  418823 cri.go:89] found id: ""
	I1210 07:53:14.937048  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.937056  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:14.937061  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:14.937122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:14.962165  418823 cri.go:89] found id: ""
	I1210 07:53:14.962180  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.962187  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:14.962192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:14.962256  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:14.987169  418823 cri.go:89] found id: ""
	I1210 07:53:14.987182  418823 logs.go:282] 0 containers: []
	W1210 07:53:14.987190  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:14.987194  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:14.987250  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:15.026690  418823 cri.go:89] found id: ""
	I1210 07:53:15.026706  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.026714  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:15.026719  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:15.026788  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:15.057882  418823 cri.go:89] found id: ""
	I1210 07:53:15.057896  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.057903  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:15.057908  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:15.057977  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:15.084042  418823 cri.go:89] found id: ""
	I1210 07:53:15.084057  418823 logs.go:282] 0 containers: []
	W1210 07:53:15.084064  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:15.084072  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:15.084082  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:15.114864  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:15.114880  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:15.179901  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:15.179922  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:15.194821  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:15.194838  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:15.259725  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:15.250726   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.251670   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.253338   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.254117   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.255652   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:15.250726   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.251670   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.253338   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.254117   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:15.255652   14346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:15.259735  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:15.259747  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
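Note that describe nodes is run with the kubectl binary and kubeconfig staged on the node itself rather than the host's kubectl; both paths appear verbatim in the log. Running the same command manually from a node shell (sketch, same paths as logged):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      describe nodes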
	I1210 07:53:17.826809  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:17.837193  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:17.837254  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:17.863390  418823 cri.go:89] found id: ""
	I1210 07:53:17.863404  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.863411  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:17.863416  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:17.863481  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:17.893221  418823 cri.go:89] found id: ""
	I1210 07:53:17.893236  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.893243  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:17.893248  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:17.893306  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:17.921130  418823 cri.go:89] found id: ""
	I1210 07:53:17.921155  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.921163  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:17.921168  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:17.921236  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:17.945888  418823 cri.go:89] found id: ""
	I1210 07:53:17.945901  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.945909  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:17.945914  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:17.945972  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:17.970988  418823 cri.go:89] found id: ""
	I1210 07:53:17.971002  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.971022  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:17.971027  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:17.971097  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:17.996399  418823 cri.go:89] found id: ""
	I1210 07:53:17.996413  418823 logs.go:282] 0 containers: []
	W1210 07:53:17.996420  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:17.996425  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:17.996494  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:18.023886  418823 cri.go:89] found id: ""
	I1210 07:53:18.023900  418823 logs.go:282] 0 containers: []
	W1210 07:53:18.023908  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:18.023931  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:18.023947  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:18.090117  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:18.090136  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:18.105261  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:18.105280  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:18.174300  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:18.166057   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.166727   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168471   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.168892   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:18.170326   14439 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:18.174310  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:18.174322  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:18.241759  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:18.241779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
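The "container status" gatherer relies on a small shell fallback chain: the backtick substitution resolves crictl to its full path when `which` finds it, otherwise `echo crictl` supplies the bare name so the lookup is deferred to sudo's PATH; and if the whole crictl invocation fails, plain `docker ps -a` is tried instead. Standalone, the same idiom is:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a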
	I1210 07:53:20.779144  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:20.788940  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:20.788999  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:20.814543  418823 cri.go:89] found id: ""
	I1210 07:53:20.814557  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.814564  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:20.814569  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:20.814634  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:20.839723  418823 cri.go:89] found id: ""
	I1210 07:53:20.839737  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.839744  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:20.839749  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:20.839808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:20.869222  418823 cri.go:89] found id: ""
	I1210 07:53:20.869237  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.869244  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:20.869249  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:20.869310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:20.893562  418823 cri.go:89] found id: ""
	I1210 07:53:20.893576  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.893593  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:20.893598  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:20.893664  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:20.919439  418823 cri.go:89] found id: ""
	I1210 07:53:20.919454  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.919461  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:20.919466  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:20.919526  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:20.947602  418823 cri.go:89] found id: ""
	I1210 07:53:20.947617  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.947624  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:20.947629  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:20.947688  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:20.976621  418823 cri.go:89] found id: ""
	I1210 07:53:20.976635  418823 logs.go:282] 0 containers: []
	W1210 07:53:20.976642  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:20.976650  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:20.976666  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:21.040860  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:21.040884  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:21.055749  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:21.055767  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:21.122414  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:21.114763   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.115197   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116691   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.116998   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:21.118463   14545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:21.122458  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:21.122468  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:21.188312  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:21.188333  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
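The roughly three-second spacing between cycles (07:53:17.8, 07:53:20.7, 07:53:23.7, ...) is the apiserver wait loop: each iteration re-runs the pgrep probe for a kube-apiserver process belonging to this profile and, finding none, re-gathers the diagnostics above. A rough shell equivalent of the observed behavior (a sketch, not minikube's actual implementation):

    # -x: exact match, -n: newest process, -f: match the full command line
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 3
    done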
	I1210 07:53:23.717609  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:23.730817  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:23.730882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:23.756488  418823 cri.go:89] found id: ""
	I1210 07:53:23.756504  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.756512  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:23.756518  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:23.756584  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:23.782540  418823 cri.go:89] found id: ""
	I1210 07:53:23.782555  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.782562  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:23.782567  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:23.782626  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:23.807181  418823 cri.go:89] found id: ""
	I1210 07:53:23.807195  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.807204  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:23.807209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:23.807273  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:23.831876  418823 cri.go:89] found id: ""
	I1210 07:53:23.831891  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.831900  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:23.831905  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:23.831964  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:23.858557  418823 cri.go:89] found id: ""
	I1210 07:53:23.858572  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.858580  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:23.858585  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:23.858646  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:23.883797  418823 cri.go:89] found id: ""
	I1210 07:53:23.883811  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.883820  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:23.883825  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:23.883922  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:23.913668  418823 cri.go:89] found id: ""
	I1210 07:53:23.913682  418823 logs.go:282] 0 containers: []
	W1210 07:53:23.913690  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:23.913698  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:23.913709  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:23.977126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:23.968779   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.969424   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971050   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.971601   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:23.973232   14645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:23.977136  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:23.977147  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:24.045089  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:24.045110  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:24.076143  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:24.076161  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:24.142779  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:24.142798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
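The dmesg gather keeps only kernel messages at warning severity or worse and caps the output at 400 lines. Stripped of the presentation flags used above, the filter reduces to:

    # warn and higher only; util-linux dmesg accepts a comma-separated level list
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400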
	I1210 07:53:26.658408  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:26.669312  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:26.669374  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:26.697592  418823 cri.go:89] found id: ""
	I1210 07:53:26.697607  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.697615  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:26.697621  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:26.697687  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:26.725323  418823 cri.go:89] found id: ""
	I1210 07:53:26.725363  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.725370  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:26.725375  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:26.725433  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:26.754039  418823 cri.go:89] found id: ""
	I1210 07:53:26.754053  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.754060  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:26.754066  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:26.754122  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:26.788322  418823 cri.go:89] found id: ""
	I1210 07:53:26.788337  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.788344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:26.788349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:26.788408  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:26.818143  418823 cri.go:89] found id: ""
	I1210 07:53:26.818157  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.818180  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:26.818185  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:26.818246  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:26.845686  418823 cri.go:89] found id: ""
	I1210 07:53:26.845699  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.845707  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:26.845714  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:26.845772  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:26.871522  418823 cri.go:89] found id: ""
	I1210 07:53:26.871536  418823 logs.go:282] 0 containers: []
	W1210 07:53:26.871544  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:26.871552  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:26.871568  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:26.902527  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:26.902544  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:26.967583  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:26.967603  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:26.982258  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:26.982275  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:27.053700  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:27.045382   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.046023   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.047684   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.048159   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:27.049667   14766 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:27.053710  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:27.053722  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:29.623259  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:29.633196  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:29.633265  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:29.658246  418823 cri.go:89] found id: ""
	I1210 07:53:29.658271  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.658278  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:29.658283  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:29.658358  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:29.685747  418823 cri.go:89] found id: ""
	I1210 07:53:29.685762  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.685769  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:29.685775  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:29.685842  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:29.721266  418823 cri.go:89] found id: ""
	I1210 07:53:29.721280  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.721288  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:29.721292  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:29.721350  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:29.746632  418823 cri.go:89] found id: ""
	I1210 07:53:29.746647  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.746655  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:29.746660  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:29.746718  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:29.771709  418823 cri.go:89] found id: ""
	I1210 07:53:29.771725  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.771732  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:29.771737  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:29.771800  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:29.801580  418823 cri.go:89] found id: ""
	I1210 07:53:29.801595  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.801602  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:29.801608  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:29.801673  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:29.827750  418823 cri.go:89] found id: ""
	I1210 07:53:29.827764  418823 logs.go:282] 0 containers: []
	W1210 07:53:29.827771  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:29.827780  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:29.827795  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:29.893437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:29.885346   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.885722   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887338   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.887997   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:29.889654   14854 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:29.893447  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:29.893458  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:29.960399  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:29.960419  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:29.991781  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:29.991799  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:30.072819  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:30.072841  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
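Note that the failing probe runs the node-local kubectl binary (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) against the in-node kubeconfig, so the failure is independent of any kubeconfig on the host. Roughly the same probe can be issued from the host through minikube's bundled kubectl (a sketch; the profile name is assumed and not shown in this excerpt):

    minikube -p <profile> kubectl -- describe nodes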
	I1210 07:53:32.588396  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:32.598821  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:32.598882  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:32.628590  418823 cri.go:89] found id: ""
	I1210 07:53:32.628604  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.628611  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:32.628616  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:32.628678  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:32.658338  418823 cri.go:89] found id: ""
	I1210 07:53:32.658352  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.658359  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:32.658364  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:32.658424  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:32.701705  418823 cri.go:89] found id: ""
	I1210 07:53:32.701719  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.701727  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:32.701732  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:32.701792  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:32.735461  418823 cri.go:89] found id: ""
	I1210 07:53:32.735476  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.735483  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:32.735488  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:32.735548  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:32.761096  418823 cri.go:89] found id: ""
	I1210 07:53:32.761109  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.761116  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:32.761121  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:32.761180  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:32.787468  418823 cri.go:89] found id: ""
	I1210 07:53:32.787481  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.787488  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:32.787493  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:32.787553  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:32.813085  418823 cri.go:89] found id: ""
	I1210 07:53:32.813098  418823 logs.go:282] 0 containers: []
	W1210 07:53:32.813105  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:32.813113  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:32.813123  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:32.881504  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:32.873323   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.874057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.875714   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.876057   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:32.877280   14957 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:32.881541  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:32.881552  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:32.951245  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:32.951265  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:32.980096  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:32.980113  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:33.046381  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:33.046400  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:35.561454  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:35.571515  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:35.571579  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:35.596461  418823 cri.go:89] found id: ""
	I1210 07:53:35.596476  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.596483  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:35.596488  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:35.596547  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:35.623764  418823 cri.go:89] found id: ""
	I1210 07:53:35.623780  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.623787  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:35.623792  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:35.623852  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:35.649136  418823 cri.go:89] found id: ""
	I1210 07:53:35.649150  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.649159  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:35.649164  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:35.649267  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:35.689785  418823 cri.go:89] found id: ""
	I1210 07:53:35.689799  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.689806  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:35.689820  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:35.689883  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:35.717073  418823 cri.go:89] found id: ""
	I1210 07:53:35.717086  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.717104  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:35.717109  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:35.717167  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:35.747852  418823 cri.go:89] found id: ""
	I1210 07:53:35.747866  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.747874  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:35.747879  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:35.747936  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:35.772479  418823 cri.go:89] found id: ""
	I1210 07:53:35.772493  418823 logs.go:282] 0 containers: []
	W1210 07:53:35.772500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:35.772508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:35.772519  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:35.843052  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:35.843075  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:35.857842  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:35.857859  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:35.927434  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:35.918703   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.919740   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921421   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.921929   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:35.923509   15070 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:35.927445  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:35.927457  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:35.996278  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:35.996299  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
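When the control plane never comes up, the kubelet journal gathered in each cycle is usually the most informative of these sources. The same 400-line slice can be pulled by hand inside the node:

    sudo journalctl -u kubelet -n 400 --no-pager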
	I1210 07:53:38.532848  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:38.543645  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:38.543706  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:38.573367  418823 cri.go:89] found id: ""
	I1210 07:53:38.573382  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.573389  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:38.573394  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:38.573456  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:38.603108  418823 cri.go:89] found id: ""
	I1210 07:53:38.603122  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.603129  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:38.603134  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:38.603193  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:38.629381  418823 cri.go:89] found id: ""
	I1210 07:53:38.629395  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.629402  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:38.629407  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:38.629467  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:38.662313  418823 cri.go:89] found id: ""
	I1210 07:53:38.662327  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.662334  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:38.662339  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:38.662402  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:38.704257  418823 cri.go:89] found id: ""
	I1210 07:53:38.704271  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.704279  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:38.704284  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:38.704346  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:38.734287  418823 cri.go:89] found id: ""
	I1210 07:53:38.734302  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.734309  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:38.734315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:38.734375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:38.760452  418823 cri.go:89] found id: ""
	I1210 07:53:38.760467  418823 logs.go:282] 0 containers: []
	W1210 07:53:38.760474  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:38.760483  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:38.760493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:38.827227  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:38.827248  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:38.841994  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:38.842011  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:38.909535  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:38.899398   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.900949   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.901647   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.903592   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:38.904308   15174 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1210 07:53:38.909548  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:38.909559  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:38.977890  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:38.977912  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:41.514495  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:41.524880  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:41.524939  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:41.550178  418823 cri.go:89] found id: ""
	I1210 07:53:41.550208  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.550216  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:41.550220  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:41.550289  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:41.578068  418823 cri.go:89] found id: ""
	I1210 07:53:41.578090  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.578097  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:41.578102  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:41.578175  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:41.603754  418823 cri.go:89] found id: ""
	I1210 07:53:41.603768  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.603776  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:41.603782  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:41.603840  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:41.628986  418823 cri.go:89] found id: ""
	I1210 07:53:41.629000  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.629008  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:41.629013  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:41.629072  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:41.654287  418823 cri.go:89] found id: ""
	I1210 07:53:41.654302  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.654309  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:41.654314  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:41.654384  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:41.688416  418823 cri.go:89] found id: ""
	I1210 07:53:41.688430  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.688437  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:41.688442  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:41.688498  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:41.713499  418823 cri.go:89] found id: ""
	I1210 07:53:41.713513  418823 logs.go:282] 0 containers: []
	W1210 07:53:41.713521  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:41.713528  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:41.713538  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:41.730410  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:41.730426  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:41.799336  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:41.790498   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.791386   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793149   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793773   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.794989   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:41.790498   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.791386   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793149   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.793773   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:41.794989   15280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:41.799346  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:41.799357  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:41.867347  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:41.867369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:41.895652  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:41.895669  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:44.462932  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:44.472795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:44.472854  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:44.504932  418823 cri.go:89] found id: ""
	I1210 07:53:44.504947  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.504960  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:44.504965  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:44.505025  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:44.535103  418823 cri.go:89] found id: ""
	I1210 07:53:44.535125  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.535133  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:44.535138  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:44.535204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:44.560225  418823 cri.go:89] found id: ""
	I1210 07:53:44.560239  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.560247  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:44.560252  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:44.560310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:44.585575  418823 cri.go:89] found id: ""
	I1210 07:53:44.585597  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.585604  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:44.585609  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:44.585668  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:44.611737  418823 cri.go:89] found id: ""
	I1210 07:53:44.611751  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.611758  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:44.611763  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:44.611824  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:44.636495  418823 cri.go:89] found id: ""
	I1210 07:53:44.636510  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.636517  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:44.636522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:44.636580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:44.665441  418823 cri.go:89] found id: ""
	I1210 07:53:44.665455  418823 logs.go:282] 0 containers: []
	W1210 07:53:44.665463  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:44.665471  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:44.665481  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:44.702032  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:44.702048  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:44.776362  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:44.776383  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:44.792240  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:44.792256  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:44.854270  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:44.846404   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.846945   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848504   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848953   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.850457   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:44.846404   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.846945   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848504   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.848953   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:44.850457   15397 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:44.854279  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:44.854291  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:47.423978  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:47.436858  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:47.436919  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:47.461997  418823 cri.go:89] found id: ""
	I1210 07:53:47.462011  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.462018  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:47.462023  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:47.462125  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:47.487419  418823 cri.go:89] found id: ""
	I1210 07:53:47.487434  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.487441  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:47.487446  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:47.487504  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:47.512823  418823 cri.go:89] found id: ""
	I1210 07:53:47.512837  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.512845  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:47.512850  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:47.512913  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:47.538819  418823 cri.go:89] found id: ""
	I1210 07:53:47.538833  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.538840  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:47.538845  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:47.538903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:47.563454  418823 cri.go:89] found id: ""
	I1210 07:53:47.563468  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.563476  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:47.563481  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:47.563544  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:47.588347  418823 cri.go:89] found id: ""
	I1210 07:53:47.588361  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.588368  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:47.588374  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:47.588435  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:47.613835  418823 cri.go:89] found id: ""
	I1210 07:53:47.613848  418823 logs.go:282] 0 containers: []
	W1210 07:53:47.613855  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:47.613863  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:47.613874  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:47.679468  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:47.679488  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:47.695124  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:47.695148  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:47.764330  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:47.755921   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.756582   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758176   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758693   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.760262   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:47.755921   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.756582   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758176   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.758693   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:47.760262   15493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:47.764340  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:47.764350  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:47.834926  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:47.834946  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:50.366762  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:50.376894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:50.376958  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:50.402825  418823 cri.go:89] found id: ""
	I1210 07:53:50.402839  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.402846  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:50.402851  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:50.402912  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:50.431663  418823 cri.go:89] found id: ""
	I1210 07:53:50.431677  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.431685  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:50.431690  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:50.431748  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:50.458799  418823 cri.go:89] found id: ""
	I1210 07:53:50.458813  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.458821  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:50.458826  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:50.458885  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:50.483609  418823 cri.go:89] found id: ""
	I1210 07:53:50.483623  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.483630  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:50.483635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:50.483693  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:50.509720  418823 cri.go:89] found id: ""
	I1210 07:53:50.509735  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.509743  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:50.509748  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:50.509808  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:50.535475  418823 cri.go:89] found id: ""
	I1210 07:53:50.535489  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.535496  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:50.535501  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:50.535560  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:50.559559  418823 cri.go:89] found id: ""
	I1210 07:53:50.559572  418823 logs.go:282] 0 containers: []
	W1210 07:53:50.559580  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:50.559587  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:50.559598  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:50.624409  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:50.624430  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:50.639099  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:50.639117  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:50.734659  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:50.717698   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.718879   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.720716   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.721025   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.727758   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:50.717698   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.718879   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.720716   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.721025   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:50.727758   15591 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:50.734673  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:50.734686  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:50.801764  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:50.801789  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:53.334554  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:53.344704  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:53.344767  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:53.369027  418823 cri.go:89] found id: ""
	I1210 07:53:53.369041  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.369049  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:53.369054  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:53.369112  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:53.392884  418823 cri.go:89] found id: ""
	I1210 07:53:53.392897  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.392904  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:53.392909  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:53.392967  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:53.421604  418823 cri.go:89] found id: ""
	I1210 07:53:53.421618  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.421625  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:53.421630  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:53.421690  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:53.446954  418823 cri.go:89] found id: ""
	I1210 07:53:53.446968  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.446976  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:53.446982  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:53.447078  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:53.472681  418823 cri.go:89] found id: ""
	I1210 07:53:53.472696  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.472703  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:53.472708  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:53.472769  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:53.497847  418823 cri.go:89] found id: ""
	I1210 07:53:53.497861  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.497868  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:53.497873  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:53.497934  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:53.524109  418823 cri.go:89] found id: ""
	I1210 07:53:53.524123  418823 logs.go:282] 0 containers: []
	W1210 07:53:53.524131  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:53.524138  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:53.524149  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:53.593506  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:53.593527  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:53.607933  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:53.607950  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:53.678735  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:53.669132   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671186   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671527   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673119   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673744   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:53.669132   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671186   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.671527   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673119   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:53.673744   15698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:53.678745  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:53.678755  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:53.752843  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:53.752865  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.287368  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:56.297545  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:56.297605  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:56.327438  418823 cri.go:89] found id: ""
	I1210 07:53:56.327452  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.327459  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:56.327465  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:56.327525  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:56.357601  418823 cri.go:89] found id: ""
	I1210 07:53:56.357616  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.357623  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:56.357627  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:56.357686  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:56.382796  418823 cri.go:89] found id: ""
	I1210 07:53:56.382810  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.382817  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:56.382822  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:56.382878  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:56.410018  418823 cri.go:89] found id: ""
	I1210 07:53:56.410032  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.410039  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:56.410050  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:56.410110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:56.437449  418823 cri.go:89] found id: ""
	I1210 07:53:56.437472  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.437480  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:56.437485  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:56.437551  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:56.462063  418823 cri.go:89] found id: ""
	I1210 07:53:56.462077  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.462096  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:56.462102  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:56.462178  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:56.489728  418823 cri.go:89] found id: ""
	I1210 07:53:56.489743  418823 logs.go:282] 0 containers: []
	W1210 07:53:56.489750  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:56.489757  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:56.489771  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:56.504129  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:56.504145  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:56.569498  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:56.561896   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.562381   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.563902   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.564206   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.565675   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:56.561896   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.562381   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.563902   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.564206   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:56.565675   15804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:53:56.569507  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:56.569518  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:56.638285  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:56.638304  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:56.676473  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:56.676490  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.250249  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:53:59.260346  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:53:59.260407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:53:59.288615  418823 cri.go:89] found id: ""
	I1210 07:53:59.288633  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.288640  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:53:59.288645  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:53:59.288707  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:53:59.314559  418823 cri.go:89] found id: ""
	I1210 07:53:59.314574  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.314581  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:53:59.314586  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:53:59.314652  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:53:59.339212  418823 cri.go:89] found id: ""
	I1210 07:53:59.339227  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.339235  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:53:59.339240  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:53:59.339296  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:53:59.365478  418823 cri.go:89] found id: ""
	I1210 07:53:59.365493  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.365500  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:53:59.365505  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:53:59.365565  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:53:59.391116  418823 cri.go:89] found id: ""
	I1210 07:53:59.391131  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.391138  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:53:59.391143  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:53:59.391204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:53:59.417133  418823 cri.go:89] found id: ""
	I1210 07:53:59.417153  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.417161  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:53:59.417166  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:53:59.417225  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:53:59.442940  418823 cri.go:89] found id: ""
	I1210 07:53:59.442954  418823 logs.go:282] 0 containers: []
	W1210 07:53:59.442961  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:53:59.442968  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:53:59.442979  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:53:59.509257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:53:59.509277  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:53:59.541319  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:53:59.541335  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:53:59.607451  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:53:59.607470  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:53:59.621934  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:53:59.621951  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:53:59.693437  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:53:59.684845   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.685597   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687307   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687819   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.689392   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:53:59.684845   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.685597   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687307   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.687819   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:53:59.689392   15923 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
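Each retry cycle above runs the same fixed battery of diagnostics over SSH. The commands below are taken verbatim from the ssh_runner lines in this log and can be replayed by hand inside the node when debugging manually; grouping them this way is illustrative, not minikube's own implementation:

    # Gate for each cycle: is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # List CRI containers per control-plane component; empty output here
    # corresponds to the `found id: ""` lines above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$name"
    done

    # Log gathering performed when no containers are found.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a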
	I1210 07:54:02.193693  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:02.204795  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:02.204860  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:02.230168  418823 cri.go:89] found id: ""
	I1210 07:54:02.230185  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.230192  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:02.230198  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:02.230311  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:02.263333  418823 cri.go:89] found id: ""
	I1210 07:54:02.263349  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.263356  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:02.263361  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:02.263426  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:02.290361  418823 cri.go:89] found id: ""
	I1210 07:54:02.290376  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.290384  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:02.290388  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:02.290448  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:02.316861  418823 cri.go:89] found id: ""
	I1210 07:54:02.316875  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.316882  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:02.316894  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:02.316951  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:02.343227  418823 cri.go:89] found id: ""
	I1210 07:54:02.343242  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.343250  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:02.343255  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:02.343319  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:02.370541  418823 cri.go:89] found id: ""
	I1210 07:54:02.370555  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.370562  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:02.370567  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:02.370655  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:02.397479  418823 cri.go:89] found id: ""
	I1210 07:54:02.397493  418823 logs.go:282] 0 containers: []
	W1210 07:54:02.397500  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:02.397508  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:02.397522  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:02.463725  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:02.463746  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:02.478295  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:02.478312  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:02.550548  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:02.542348   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.542913   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544446   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544888   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.546428   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:02.542348   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.542913   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544446   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.544888   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:02.546428   16014 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:02.550558  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:02.550569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:02.620103  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:02.620125  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.149959  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:05.160417  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:05.160482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:05.189797  418823 cri.go:89] found id: ""
	I1210 07:54:05.189812  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.189826  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:05.189831  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:05.189890  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:05.217788  418823 cri.go:89] found id: ""
	I1210 07:54:05.217815  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.217823  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:05.217828  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:05.217893  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:05.243664  418823 cri.go:89] found id: ""
	I1210 07:54:05.243678  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.243686  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:05.243690  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:05.243749  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:05.269052  418823 cri.go:89] found id: ""
	I1210 07:54:05.269067  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.269075  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:05.269080  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:05.269140  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:05.294538  418823 cri.go:89] found id: ""
	I1210 07:54:05.294552  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.294559  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:05.294564  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:05.294627  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:05.321865  418823 cri.go:89] found id: ""
	I1210 07:54:05.321880  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.321887  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:05.321893  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:05.321954  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:05.348181  418823 cri.go:89] found id: ""
	I1210 07:54:05.348195  418823 logs.go:282] 0 containers: []
	W1210 07:54:05.348203  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:05.348210  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:05.348225  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:05.379036  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:05.379062  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:05.443960  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:05.443981  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:05.458603  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:05.458620  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:05.526883  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:05.519093   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.519877   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.521585   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.522069   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.523123   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:05.519093   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.519877   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.521585   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.522069   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:05.523123   16137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:05.526895  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:05.526910  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
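
Note: the block above is one full iteration of minikube's apiserver wait loop — a pgrep probe for a running kube-apiserver, seven per-component crictl listings that all come back empty, and a round of log gathering (kubelet, dmesg, describe nodes, CRI-O, container status) before the next attempt roughly three seconds later. A minimal, hypothetical Go sketch of that polling pattern follows; names, timeout, and interval are illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
// probe seen in the log; pgrep exits non-zero when no process matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// In the real log, each failed probe is followed by gathering
		// kubelet, dmesg, describe-nodes, CRI-O, and container-status
		// logs before the next attempt ~3s later.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
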
	I1210 07:54:08.095997  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:08.105932  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:08.105991  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:08.130974  418823 cri.go:89] found id: ""
	I1210 07:54:08.130988  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.130996  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:08.131001  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:08.131153  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:08.155374  418823 cri.go:89] found id: ""
	I1210 07:54:08.155388  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.155396  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:08.155401  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:08.155458  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:08.180878  418823 cri.go:89] found id: ""
	I1210 07:54:08.180892  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.180899  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:08.180904  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:08.180962  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:08.209651  418823 cri.go:89] found id: ""
	I1210 07:54:08.209664  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.209672  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:08.209676  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:08.209735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:08.235331  418823 cri.go:89] found id: ""
	I1210 07:54:08.235344  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.235358  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:08.235362  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:08.235421  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:08.260980  418823 cri.go:89] found id: ""
	I1210 07:54:08.260995  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.261003  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:08.261008  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:08.261066  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:08.286809  418823 cri.go:89] found id: ""
	I1210 07:54:08.286824  418823 logs.go:282] 0 containers: []
	W1210 07:54:08.286831  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:08.286838  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:08.286848  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:08.353470  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:08.353491  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:08.367911  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:08.367928  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:08.434091  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:08.426418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.426959   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428869   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.430318   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:08.426418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.426959   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428418   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.428869   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:08.430318   16229 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:08.434101  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:08.434120  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:08.502201  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:08.502221  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:11.031209  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:11.041439  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:11.041500  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:11.067253  418823 cri.go:89] found id: ""
	I1210 07:54:11.067268  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.067275  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:11.067280  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:11.067339  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:11.092951  418823 cri.go:89] found id: ""
	I1210 07:54:11.092965  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.092972  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:11.092978  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:11.093038  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:11.118430  418823 cri.go:89] found id: ""
	I1210 07:54:11.118445  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.118453  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:11.118458  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:11.118520  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:11.144820  418823 cri.go:89] found id: ""
	I1210 07:54:11.144835  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.144843  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:11.144848  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:11.144914  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:11.173374  418823 cri.go:89] found id: ""
	I1210 07:54:11.173388  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.173396  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:11.173401  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:11.173459  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:11.198352  418823 cri.go:89] found id: ""
	I1210 07:54:11.198367  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.198375  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:11.198380  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:11.198450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:11.224536  418823 cri.go:89] found id: ""
	I1210 07:54:11.224550  418823 logs.go:282] 0 containers: []
	W1210 07:54:11.224559  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:11.224569  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:11.224579  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:11.290262  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:11.290283  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:11.304639  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:11.304658  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:11.368924  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:11.359948   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.360716   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362347   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362996   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.364714   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:11.359948   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.360716   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362347   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.362996   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:11.364714   16337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:11.368934  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:11.368944  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:11.435589  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:11.435610  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
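
Note: each cycle issues the same seven crictl queries, one per expected control-plane component; an empty result is what the log records as `found id: ""` and "0 containers". The standalone sketch below (assumed helper name containerIDs; requires crictl and sudo on the host) reproduces those queries by hand.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same query the log shows for each component:
//   sudo crictl ps -a --quiet --name=<component>
// and returns the (possibly empty) list of container IDs.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: query failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers\n", c, len(ids)) // 0 everywhere in this run
	}
}
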
	I1210 07:54:13.966356  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:13.976957  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:13.977022  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:14.004519  418823 cri.go:89] found id: ""
	I1210 07:54:14.004536  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.004546  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:14.004551  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:14.004633  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:14.033357  418823 cri.go:89] found id: ""
	I1210 07:54:14.033372  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.033380  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:14.033385  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:14.033445  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:14.059488  418823 cri.go:89] found id: ""
	I1210 07:54:14.059510  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.059517  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:14.059522  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:14.059585  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:14.087964  418823 cri.go:89] found id: ""
	I1210 07:54:14.087987  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.087996  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:14.088002  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:14.088073  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:14.114469  418823 cri.go:89] found id: ""
	I1210 07:54:14.114483  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.114501  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:14.114507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:14.114580  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:14.144394  418823 cri.go:89] found id: ""
	I1210 07:54:14.144408  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.144415  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:14.144420  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:14.144482  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:14.173724  418823 cri.go:89] found id: ""
	I1210 07:54:14.173746  418823 logs.go:282] 0 containers: []
	W1210 07:54:14.173754  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:14.173762  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:14.173779  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:14.247855  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:14.239156   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.239953   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.241733   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.242311   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.243958   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:14.239156   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.239953   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.241733   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.242311   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:14.243958   16433 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:14.247865  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:14.247879  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:14.317778  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:14.317798  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:14.346568  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:14.346586  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:14.412678  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:14.412697  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:16.927406  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:16.938842  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:16.938903  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:16.972184  418823 cri.go:89] found id: ""
	I1210 07:54:16.972197  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.972204  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:16.972209  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:16.972268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:16.999114  418823 cri.go:89] found id: ""
	I1210 07:54:16.999129  418823 logs.go:282] 0 containers: []
	W1210 07:54:16.999136  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:16.999141  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:16.999204  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:17.026900  418823 cri.go:89] found id: ""
	I1210 07:54:17.026913  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.026921  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:17.026926  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:17.026985  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:17.053121  418823 cri.go:89] found id: ""
	I1210 07:54:17.053135  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.053143  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:17.053148  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:17.053208  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:17.079184  418823 cri.go:89] found id: ""
	I1210 07:54:17.079198  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.079204  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:17.079209  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:17.079268  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:17.104597  418823 cri.go:89] found id: ""
	I1210 07:54:17.104611  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.104619  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:17.104624  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:17.104681  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:17.133412  418823 cri.go:89] found id: ""
	I1210 07:54:17.133426  418823 logs.go:282] 0 containers: []
	W1210 07:54:17.133434  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:17.133441  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:17.133452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:17.147432  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:17.147452  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:17.210612  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:17.202657   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.203522   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205130   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205458   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.206975   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:17.202657   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.203522   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205130   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.205458   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:17.206975   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:17.210623  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:17.210634  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:17.279473  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:17.279493  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:17.307828  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:17.307852  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:19.881299  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:19.891315  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:19.891375  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:19.926287  418823 cri.go:89] found id: ""
	I1210 07:54:19.926302  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.926309  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:19.926314  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:19.926373  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:19.961020  418823 cri.go:89] found id: ""
	I1210 07:54:19.961036  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.961043  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:19.961048  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:19.961111  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:19.994369  418823 cri.go:89] found id: ""
	I1210 07:54:19.994383  418823 logs.go:282] 0 containers: []
	W1210 07:54:19.994390  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:19.994395  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:19.994455  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:20.028896  418823 cri.go:89] found id: ""
	I1210 07:54:20.028911  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.028919  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:20.028924  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:20.028989  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:20.059934  418823 cri.go:89] found id: ""
	I1210 07:54:20.059955  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.059963  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:20.060015  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:20.060093  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:20.086606  418823 cri.go:89] found id: ""
	I1210 07:54:20.086622  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.086629  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:20.086635  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:20.086703  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:20.112469  418823 cri.go:89] found id: ""
	I1210 07:54:20.112486  418823 logs.go:282] 0 containers: []
	W1210 07:54:20.112496  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:20.112504  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:20.112515  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:20.176933  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:20.176953  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:20.193125  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:20.193142  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:20.257603  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:20.249385   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.249848   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251552   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251918   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.253488   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:20.249385   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.249848   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251552   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.251918   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:20.253488   16651 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:20.257614  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:20.257625  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:20.324617  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:20.324638  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:22.853766  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:22.864101  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:22.864164  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:22.888959  418823 cri.go:89] found id: ""
	I1210 07:54:22.888974  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.888981  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:22.888986  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:22.889046  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:22.921447  418823 cri.go:89] found id: ""
	I1210 07:54:22.921460  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.921468  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:22.921473  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:22.921543  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:22.955505  418823 cri.go:89] found id: ""
	I1210 07:54:22.955519  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.955526  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:22.955531  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:22.955594  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:22.986982  418823 cri.go:89] found id: ""
	I1210 07:54:22.986996  418823 logs.go:282] 0 containers: []
	W1210 07:54:22.987004  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:22.987031  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:22.987094  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:23.016264  418823 cri.go:89] found id: ""
	I1210 07:54:23.016279  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.016286  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:23.016291  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:23.016354  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:23.046460  418823 cri.go:89] found id: ""
	I1210 07:54:23.046474  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.046482  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:23.046507  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:23.046577  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:23.074337  418823 cri.go:89] found id: ""
	I1210 07:54:23.074352  418823 logs.go:282] 0 containers: []
	W1210 07:54:23.074361  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:23.074369  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:23.074384  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:23.139358  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:23.139380  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:23.154211  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:23.154233  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:23.215488  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:23.206516   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.207119   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.208073   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.209680   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.210273   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:23.206516   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.207119   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.208073   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.209680   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:23.210273   16755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:23.215499  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:23.215512  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:23.282950  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:23.282971  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:25.812054  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:25.822192  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:25.822255  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:25.847807  418823 cri.go:89] found id: ""
	I1210 07:54:25.847822  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.847831  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:25.847836  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:25.847900  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:25.876611  418823 cri.go:89] found id: ""
	I1210 07:54:25.876626  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.876634  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:25.876638  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:25.876698  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:25.902947  418823 cri.go:89] found id: ""
	I1210 07:54:25.902961  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.902968  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:25.902973  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:25.903056  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:25.944041  418823 cri.go:89] found id: ""
	I1210 07:54:25.944055  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.944062  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:25.944068  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:25.944128  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:25.970835  418823 cri.go:89] found id: ""
	I1210 07:54:25.970849  418823 logs.go:282] 0 containers: []
	W1210 07:54:25.970857  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:25.970862  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:25.970923  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:26.003198  418823 cri.go:89] found id: ""
	I1210 07:54:26.003214  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.003222  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:26.003228  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:26.003300  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:26.032526  418823 cri.go:89] found id: ""
	I1210 07:54:26.032540  418823 logs.go:282] 0 containers: []
	W1210 07:54:26.032548  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:26.032556  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:26.032569  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:26.099635  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:26.099655  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:26.114354  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:26.114373  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:26.179258  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:26.170492   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.171349   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173076   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173401   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.174907   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:26.170492   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.171349   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173076   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.173401   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:26.174907   16862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:26.179269  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:26.179281  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:26.248336  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:26.248355  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:28.782480  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:28.792391  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:28.792450  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:28.817311  418823 cri.go:89] found id: ""
	I1210 07:54:28.817325  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.817332  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:28.817338  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:28.817393  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:28.841584  418823 cri.go:89] found id: ""
	I1210 07:54:28.841597  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.841605  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:28.841609  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:28.841666  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:28.867004  418823 cri.go:89] found id: ""
	I1210 07:54:28.867040  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.867048  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:28.867052  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:28.867110  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:28.891591  418823 cri.go:89] found id: ""
	I1210 07:54:28.891604  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.891615  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:28.891621  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:28.891677  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:28.927624  418823 cri.go:89] found id: ""
	I1210 07:54:28.927637  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.927645  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:28.927650  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:28.927714  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:28.955409  418823 cri.go:89] found id: ""
	I1210 07:54:28.955423  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.955430  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:28.955435  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:28.955493  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:28.980779  418823 cri.go:89] found id: ""
	I1210 07:54:28.980794  418823 logs.go:282] 0 containers: []
	W1210 07:54:28.980801  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:28.980808  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:28.980819  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:28.995862  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:28.995878  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:29.065674  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:29.057420   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.058224   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.059795   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.060097   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.061321   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:29.057420   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.058224   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.059795   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.060097   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:29.061321   16964 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:29.065683  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:29.065695  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:29.133594  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:29.133615  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:29.165522  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:29.165539  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
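The loop above is minikube's control-plane health probe: it pgreps for a running kube-apiserver, asks the CRI for each control-plane container by name, and, finding none, gathers diagnostics. The same evidence can be collected by hand from inside the node (a sketch; entering via `minikube ssh` is assumed):

    # manual equivalent of the diagnostics gathered above
    sudo crictl ps -a --quiet --name=kube-apiserver    # empty output = no apiserver container
    sudo journalctl -u kubelet -n 400                  # kubelet service log
    sudo journalctl -u crio -n 400                     # CRI-O runtime log
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400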
	I1210 07:54:31.733707  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:31.743741  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:31.743803  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:31.768618  418823 cri.go:89] found id: ""
	I1210 07:54:31.768633  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.768647  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:31.768652  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:31.768712  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:31.797641  418823 cri.go:89] found id: ""
	I1210 07:54:31.797656  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.797663  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:31.797668  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:31.797729  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:31.823152  418823 cri.go:89] found id: ""
	I1210 07:54:31.823166  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.823174  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:31.823178  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:31.823241  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:31.849644  418823 cri.go:89] found id: ""
	I1210 07:54:31.849659  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.849666  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:31.849671  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:31.849735  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:31.877522  418823 cri.go:89] found id: ""
	I1210 07:54:31.877545  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.877553  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:31.877558  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:31.877625  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:31.903129  418823 cri.go:89] found id: ""
	I1210 07:54:31.903142  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.903150  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:31.903155  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:31.903212  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:31.941362  418823 cri.go:89] found id: ""
	I1210 07:54:31.941376  418823 logs.go:282] 0 containers: []
	W1210 07:54:31.941383  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:31.941391  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:31.941402  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:32.025544  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:32.025566  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:32.040949  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:32.040969  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:32.110721  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:32.101989   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.102773   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104458   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104789   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.106701   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:32.101989   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.102773   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104458   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.104789   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:32.106701   17071 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:32.110732  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:32.110743  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:32.178647  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:32.178670  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:34.707070  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:34.717245  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:34.717310  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:34.745693  418823 cri.go:89] found id: ""
	I1210 07:54:34.745707  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.745714  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:34.745726  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:34.745790  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:34.771395  418823 cri.go:89] found id: ""
	I1210 07:54:34.771409  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.771416  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:34.771421  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:34.771479  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:34.797775  418823 cri.go:89] found id: ""
	I1210 07:54:34.797788  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.797796  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:34.797801  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:34.797861  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:34.825083  418823 cri.go:89] found id: ""
	I1210 07:54:34.825100  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.825107  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:34.825112  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:34.825177  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:34.850864  418823 cri.go:89] found id: ""
	I1210 07:54:34.850879  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.850896  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:34.850901  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:34.850975  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:34.875132  418823 cri.go:89] found id: ""
	I1210 07:54:34.875146  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.875154  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:34.875159  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:34.875227  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:34.899938  418823 cri.go:89] found id: ""
	I1210 07:54:34.899953  418823 logs.go:282] 0 containers: []
	W1210 07:54:34.899970  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:34.899979  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:34.899990  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:34.923898  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:34.923916  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:35.004342  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:34.994993   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.995801   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997355   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997871   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.999627   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:34.994993   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.995801   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997355   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.997871   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:34.999627   17176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:35.004372  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:35.004385  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:35.076257  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:35.076279  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:35.104842  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:35.104858  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:37.672039  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:37.681946  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 07:54:37.682009  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 07:54:37.706328  418823 cri.go:89] found id: ""
	I1210 07:54:37.706342  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.706349  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 07:54:37.706354  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 07:54:37.706420  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 07:54:37.731157  418823 cri.go:89] found id: ""
	I1210 07:54:37.731171  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.731179  418823 logs.go:284] No container was found matching "etcd"
	I1210 07:54:37.731183  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 07:54:37.731243  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 07:54:37.756672  418823 cri.go:89] found id: ""
	I1210 07:54:37.756686  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.756693  418823 logs.go:284] No container was found matching "coredns"
	I1210 07:54:37.756698  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 07:54:37.756758  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 07:54:37.782323  418823 cri.go:89] found id: ""
	I1210 07:54:37.782337  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.782344  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 07:54:37.782349  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 07:54:37.782407  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 07:54:37.809398  418823 cri.go:89] found id: ""
	I1210 07:54:37.809411  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.809425  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 07:54:37.809430  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 07:54:37.809488  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 07:54:37.834279  418823 cri.go:89] found id: ""
	I1210 07:54:37.834300  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.834307  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 07:54:37.834311  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 07:54:37.834378  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 07:54:37.860329  418823 cri.go:89] found id: ""
	I1210 07:54:37.860343  418823 logs.go:282] 0 containers: []
	W1210 07:54:37.860351  418823 logs.go:284] No container was found matching "kindnet"
	I1210 07:54:37.860359  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 07:54:37.860369  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 07:54:37.933541  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 07:54:37.923454   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.924390   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926115   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926751   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.928659   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 07:54:37.923454   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.924390   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926115   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.926751   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 07:54:37.928659   17271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 07:54:37.933553  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 07:54:37.933564  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 07:54:38.012971  418823 logs.go:123] Gathering logs for container status ...
	I1210 07:54:38.012996  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 07:54:38.049266  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 07:54:38.049284  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 07:54:38.124985  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 07:54:38.125006  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 07:54:40.640115  418823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 07:54:40.651783  418823 kubeadm.go:602] duration metric: took 4m3.269334188s to restartPrimaryControlPlane
	W1210 07:54:40.651842  418823 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 07:54:40.651915  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
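Having failed to restart the existing control plane within 4m3s, minikube falls back to a reset-and-reinit. Pulled out of the log line above for readability (behavior summarized per standard kubeadm; a sketch, not minikube-specific):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # --force skips the confirmation prompt; reset clears /etc/kubernetes/manifests
    # and the kubeadm-written *.conf files, which is why every ls/grep below
    # reports "No such file or directory"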
	I1210 07:54:41.061132  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:54:41.073851  418823 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 07:54:41.081733  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:54:41.081788  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:54:41.089443  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:54:41.089453  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:54:41.089505  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:54:41.097510  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:54:41.097570  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:54:41.105078  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:54:41.112622  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:54:41.112682  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:54:41.120112  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.127831  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:54:41.127887  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:54:41.135843  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:54:41.143605  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:54:41.143662  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
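The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any conf file that does not reference the expected control-plane endpoint is removed before re-running init. As a loop (a sketch equivalent to the logged commands):

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"   # drop confs missing the endpoint (or absent)
    done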
	I1210 07:54:41.150893  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:54:41.188283  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:54:41.188576  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:54:41.266308  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:54:41.266369  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:54:41.266407  418823 kubeadm.go:319] OS: Linux
	I1210 07:54:41.266448  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:54:41.266493  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:54:41.266536  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:54:41.266581  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:54:41.266627  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:54:41.266672  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:54:41.266714  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:54:41.266758  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:54:41.266801  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:54:41.327793  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:54:41.327890  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:54:41.327975  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:54:41.335492  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:54:41.340870  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:54:41.340961  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:54:41.341031  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:54:41.341119  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:54:41.341186  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:54:41.341262  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:54:41.341320  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:54:41.341398  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:54:41.341465  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:54:41.341545  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:54:41.341622  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:54:41.341659  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:54:41.341719  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:54:41.831104  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:54:41.953522  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:54:42.205323  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:54:42.449785  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:54:42.618213  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:54:42.619047  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:54:42.621575  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:54:42.624790  418823 out.go:252]   - Booting up control plane ...
	I1210 07:54:42.624883  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:54:42.624959  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:54:42.625035  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:54:42.639751  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:54:42.639880  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:54:42.648702  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:54:42.648797  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:54:42.648841  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:54:42.779710  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:54:42.779857  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 07:58:42.778273  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000214333s
	I1210 07:58:42.778318  418823 kubeadm.go:319] 
	I1210 07:58:42.778386  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 07:58:42.778418  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 07:58:42.778523  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 07:58:42.778528  418823 kubeadm.go:319] 
	I1210 07:58:42.778632  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 07:58:42.778679  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 07:58:42.778709  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 07:58:42.778712  418823 kubeadm.go:319] 
	I1210 07:58:42.783355  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 07:58:42.783807  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 07:58:42.783918  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 07:58:42.784153  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 07:58:42.784159  418823 kubeadm.go:319] 
	I1210 07:58:42.784227  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
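This is kubeadm's kubelet health gate: after starting the kubelet it polls http://127.0.0.1:10248/healthz for up to 4m0s, and here the probe never succeeded. The probe and the checks kubeadm suggests can be reproduced by hand:

    curl -sS http://127.0.0.1:10248/healthz   # a healthy kubelet answers "ok"
    systemctl status kubelet                  # is the service running at all?
    journalctl -xeu kubelet                   # the underlying error usually surfaces here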
	W1210 07:58:42.784352  418823 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000214333s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
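Of the preflight warnings in the dump above, the cgroups v1 deprecation is the likeliest link to the unhealthy kubelet on this 5.15 host. Whether the host is actually on v1 is a one-liner, and the warning text says kubelet v1.35+ on cgroup v1 requires FailCgroupV1 set to false; a sketch of both (the YAML field spelling and file path are assumptions; where the fragment gets merged is deployment-specific):

    stat -fc %T /sys/fs/cgroup   # "cgroup2fs" = cgroup v2, "tmpfs" = cgroup v1
    # assumed KubeletConfiguration fragment per the warning text
    cat > /tmp/kubelet-cgroupv1.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false
    EOF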
	
	I1210 07:58:42.784459  418823 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 07:58:43.198112  418823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 07:58:43.211996  418823 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 07:58:43.212056  418823 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 07:58:43.219732  418823 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 07:58:43.219740  418823 kubeadm.go:158] found existing configuration files:
	
	I1210 07:58:43.219791  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1210 07:58:43.228096  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 07:58:43.228153  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 07:58:43.235851  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1210 07:58:43.244105  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 07:58:43.244161  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 07:58:43.252172  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.259776  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 07:58:43.259838  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 07:58:43.267182  418823 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1210 07:58:43.274881  418823 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 07:58:43.274939  418823 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 07:58:43.282494  418823 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 07:58:43.323208  418823 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 07:58:43.323257  418823 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 07:58:43.392495  418823 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 07:58:43.392566  418823 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 07:58:43.392605  418823 kubeadm.go:319] OS: Linux
	I1210 07:58:43.392653  418823 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 07:58:43.392700  418823 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 07:58:43.392753  418823 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 07:58:43.392806  418823 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 07:58:43.392856  418823 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 07:58:43.392902  418823 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 07:58:43.392950  418823 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 07:58:43.392997  418823 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 07:58:43.393041  418823 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 07:58:43.459397  418823 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 07:58:43.459500  418823 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 07:58:43.459594  418823 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 07:58:43.467473  418823 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 07:58:43.472849  418823 out.go:252]   - Generating certificates and keys ...
	I1210 07:58:43.472935  418823 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 07:58:43.472999  418823 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 07:58:43.473075  418823 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 07:58:43.473135  418823 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 07:58:43.473203  418823 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 07:58:43.473256  418823 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 07:58:43.473324  418823 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 07:58:43.473385  418823 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 07:58:43.474012  418823 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 07:58:43.474414  418823 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 07:58:43.474604  418823 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 07:58:43.474667  418823 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 07:58:43.690916  418823 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 07:58:43.922489  418823 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 07:58:44.055635  418823 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 07:58:44.187430  418823 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 07:58:44.228570  418823 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 07:58:44.229295  418823 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 07:58:44.233140  418823 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 07:58:44.236201  418823 out.go:252]   - Booting up control plane ...
	I1210 07:58:44.236295  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 07:58:44.236371  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 07:58:44.236933  418823 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 07:58:44.251863  418823 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 07:58:44.251964  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 07:58:44.259287  418823 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 07:58:44.259598  418823 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 07:58:44.259801  418823 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 07:58:44.391514  418823 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 07:58:44.391627  418823 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:02:44.389879  418823 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00019224s
	I1210 08:02:44.389912  418823 kubeadm.go:319] 
	I1210 08:02:44.389980  418823 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 08:02:44.390013  418823 kubeadm.go:319] 	- The kubelet is not running
	I1210 08:02:44.390123  418823 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 08:02:44.390155  418823 kubeadm.go:319] 
	I1210 08:02:44.390271  418823 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 08:02:44.390303  418823 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 08:02:44.390331  418823 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 08:02:44.390335  418823 kubeadm.go:319] 
	I1210 08:02:44.395328  418823 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:02:44.395720  418823 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 08:02:44.395823  418823 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 08:02:44.396068  418823 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 08:02:44.396072  418823 kubeadm.go:319] 
	I1210 08:02:44.396138  418823 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 08:02:44.396188  418823 kubeadm.go:403] duration metric: took 12m7.052327562s to StartCluster
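The 12m7s charged to StartCluster decomposes cleanly from the timestamps: about 4m3s in restartPrimaryControlPlane (ending 07:54:40), then two kubeadm init attempts that each burn the full 4m kubelet-check timeout (07:54:41-07:58:42 and 07:58:43-08:02:44), plus a few seconds of reset and config cleanup in between: 4m3s + 4m1s + 4m1s + ~2s ≈ 12m7s.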
	I1210 08:02:44.396219  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:02:44.396280  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:02:44.421374  418823 cri.go:89] found id: ""
	I1210 08:02:44.421389  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.421396  418823 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:02:44.421401  418823 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:02:44.421463  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:02:44.447342  418823 cri.go:89] found id: ""
	I1210 08:02:44.447356  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.447363  418823 logs.go:284] No container was found matching "etcd"
	I1210 08:02:44.447368  418823 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:02:44.447429  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:02:44.472601  418823 cri.go:89] found id: ""
	I1210 08:02:44.472614  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.472621  418823 logs.go:284] No container was found matching "coredns"
	I1210 08:02:44.472627  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:02:44.472684  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:02:44.501973  418823 cri.go:89] found id: ""
	I1210 08:02:44.501986  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.501993  418823 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:02:44.502000  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:02:44.502059  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:02:44.527997  418823 cri.go:89] found id: ""
	I1210 08:02:44.528011  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.528018  418823 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:02:44.528023  418823 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:02:44.528083  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:02:44.558353  418823 cri.go:89] found id: ""
	I1210 08:02:44.558367  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.558374  418823 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:02:44.558379  418823 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:02:44.558439  418823 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:02:44.583751  418823 cri.go:89] found id: ""
	I1210 08:02:44.583764  418823 logs.go:282] 0 containers: []
	W1210 08:02:44.583771  418823 logs.go:284] No container was found matching "kindnet"
	I1210 08:02:44.583780  418823 logs.go:123] Gathering logs for dmesg ...
	I1210 08:02:44.583792  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:02:44.598048  418823 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:02:44.598065  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:02:44.670126  418823 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:02:44.661736   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.662310   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664031   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664536   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.666240   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1210 08:02:44.661736   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.662310   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664031   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.664536   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:44.666240   21049 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:02:44.670142  418823 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:02:44.670153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:02:44.741133  418823 logs.go:123] Gathering logs for container status ...
	I1210 08:02:44.741153  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:02:44.768780  418823 logs.go:123] Gathering logs for kubelet ...
	I1210 08:02:44.768797  418823 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 08:02:44.836964  418823 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 08:02:44.837011  418823 out.go:285] * 
	W1210 08:02:44.837080  418823 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.837155  418823 out.go:285] * 
	W1210 08:02:44.839300  418823 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 08:02:44.844978  418823 out.go:203] 
	W1210 08:02:44.848781  418823 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00019224s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:02:44.848820  418823 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 08:02:44.848841  418823 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 08:02:44.852612  418823 out.go:203] 
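
Note: the underlying failure shows up in the kubelet journal further down ("kubelet is configured to not run on a host using cgroup v1"), matching the [WARNING SystemVerification] text above. A minimal triage sketch follows, assuming shell access to the host; the failCgroupV1 spelling is inferred from the FailCgroupV1 option named in that warning and is not verified against this kubelet build:

	# Confirm the host is on cgroup v1: "tmpfs" here means v1, "cgroup2fs" means v2.
	stat -fc %T /sys/fs/cgroup/
	# Retry with the suggestion quoted above:
	out/minikube-linux-arm64 start -p functional-314220 --extra-config=kubelet.cgroup-driver=systemd
	# Per the warning, kubelet v1.35+ on a cgroup v1 host additionally needs this
	# KubeletConfiguration entry (assumed spelling):
	#   failCgroupV1: false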
	
	
	==> CRI-O <==
	Dec 10 07:50:35 functional-314220 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.331601646Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=62979dbb-32a0-43d5-a3b2-a98045dd82da name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.332356674Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=882162fe-73f4-4075-9551-d0a546a62bbf name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.332837779Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=07883432-643e-4682-a159-ee81c5c97128 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.333259733Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=c2e92f0d-1459-497a-8d07-d423bb265c62 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.333667081Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0e2e5041-5e30-43e4-8893-355aed834dc7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.334042339Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=16ec4d28-0473-431c-a6c6-f756cd1ed250 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:54:41 functional-314220 crio[9896]: time="2025-12-10T07:54:41.334553221Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=945bf42d-863d-43db-9dbb-1cb7338cdf87 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.462758438Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=30754360-77fc-41d9-961a-703309105bf4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.463612109Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ce58a9d0-ec5e-41a7-a162-73ed5f175442 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464131886Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=af48f6b4-c6c8-458a-8d08-3443ae3e881b name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.464662517Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4191b9f9-c176-40bc-b3bb-ec0edd3076c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465135606Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=eaddf230-cd32-4499-a396-5bbd1b1cb31a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.465587147Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=300cd477-ebce-4fed-8c84-bc9781d52848 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 07:58:43 functional-314220 crio[9896]: time="2025-12-10T07:58:43.466022016Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=1367fa60-098e-4704-b6f3-b114a75d5405 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067588181Z" level=info msg="Checking image status: kicbase/echo-server:functional-314220" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067769762Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067811068Z" level=info msg="Image kicbase/echo-server:functional-314220 not found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.067872754Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-314220 found" id=9cb916ae-b10a-45e5-9f16-e18af053130e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.095996195Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-314220" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.096290073Z" level=info msg="Image docker.io/kicbase/echo-server:functional-314220 not found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.09635135Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-314220 found" id=b1a2f38f-ca46-4e03-a345-6e3600d518ba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132021615Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-314220" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.132192218Z" level=info msg="Image localhost/kicbase/echo-server:functional-314220 not found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:02:54 functional-314220 crio[9896]: time="2025-12-10T08:02:54.13224538Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-314220 found" id=2edafe31-0e02-4fc2-9f4f-edcaf98b26aa name=/runtime.v1.ImageService/ImageStatus
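
Note: the three "not found" probes above are CRI-O resolving the unqualified tag kicbase/echo-server:functional-314220 against its unqualified-search registries (docker.io, then localhost) before giving up; the image never reached the store. A quick manual check, assuming crictl is available inside the node (the report itself shells out to crictl earlier):

	out/minikube-linux-arm64 -p functional-314220 ssh "sudo crictl images | grep echo-server"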
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1210 08:02:55.331249   21834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:55.332031   21834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:55.333594   21834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:55.333900   21834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1210 08:02:55.335453   21834 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 05:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016194] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.497166] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034163] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.835295] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.431549] kauditd_printk_skb: 36 callbacks suppressed
	[Dec10 05:39] FS-Cache: Duplicate cookie detected
	[  +0.000728] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.000985] FS-Cache: O-cookie d=0000000000fc4794{9P.session} n=0000000060003167
	[  +0.001121] FS-Cache: O-key=[10] '34323935323137323137'
	[  +0.000772] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000947] FS-Cache: N-cookie d=0000000000fc4794{9P.session} n=00000000722b61f1
	[  +0.001080] FS-Cache: N-key=[10] '34323935323137323137'
	[Dec10 06:28] hrtimer: interrupt took 46138812 ns
	[Dec10 07:11] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:23] kauditd_printk_skb: 8 callbacks suppressed
	[Dec10 07:25] overlayfs: idmapped layers are currently not supported
	[  +0.065949] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec10 07:31] overlayfs: idmapped layers are currently not supported
	[Dec10 07:32] overlayfs: idmapped layers are currently not supported
	[Dec10 07:50] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 08:02:55 up  2:45,  0 user,  load average: 0.42, 0.25, 0.50
	Linux functional-314220 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:02:53 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:53 functional-314220 kubelet[21635]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:53 functional-314220 kubelet[21635]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:53 functional-314220 kubelet[21635]: E1210 08:02:53.219058   21635 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:02:53 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:02:53 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:02:53 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 332.
	Dec 10 08:02:53 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:53 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:53 functional-314220 kubelet[21665]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:53 functional-314220 kubelet[21665]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:53 functional-314220 kubelet[21665]: E1210 08:02:53.977065   21665 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:02:53 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:02:53 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:02:54 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 333.
	Dec 10 08:02:54 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:54 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:54 functional-314220 kubelet[21736]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:54 functional-314220 kubelet[21736]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:02:54 functional-314220 kubelet[21736]: E1210 08:02:54.753384   21736 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:02:54 functional-314220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:02:54 functional-314220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:02:55 functional-314220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 334.
	Dec 10 08:02:55 functional-314220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:02:55 functional-314220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220: exit status 2 (445.138853ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-314220" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.07s)
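
Note: the kubelet journal above shows a systemd restart loop (restart counters 332 through 334), so every parallel subtest from here on that needs the apiserver on port 8441 fails with connection refused. A hedged two-command sanity check, using only commands that already appear in this report:

	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-314220 -n functional-314220
	out/minikube-linux-arm64 -p functional-314220 ssh "sudo journalctl -xeu kubelet | tail -n 20"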

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-314220 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-314220 create deployment hello-node --image kicbase/echo-server: exit status 1 (78.316059ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-314220 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)
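
Note: this and the remaining ServiceCmd subtests fail for the same reason (the apiserver endpoint 192.168.49.2:8441 refuses connections), not because of anything specific to the deployment. A minimal direct probe against the same context, kubectl assumed on the runner:

	kubectl --context functional-314220 get --raw /healthz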

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 service list: exit status 103 (333.259922ms)

-- stdout --
	* The control-plane node functional-314220 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-314220"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-314220 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-314220 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-314220\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 service list -o json: exit status 103 (322.254655ms)

-- stdout --
	* The control-plane node functional-314220 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-314220"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-314220 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.76s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 service --namespace=default --https --url hello-node: exit status 103 (759.302912ms)

-- stdout --
	* The control-plane node functional-314220 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-314220"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-314220 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.76s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 service hello-node --url --format={{.IP}}: exit status 103 (386.267284ms)

-- stdout --
	* The control-plane node functional-314220 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-314220"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-314220 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-314220 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-314220\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 service hello-node --url: exit status 103 (375.097648ms)

-- stdout --
	* The control-plane node functional-314220 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-314220"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-314220 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-314220 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-314220"
functional_test.go:1579: failed to parse "* The control-plane node functional-314220 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-314220\"": parse "* The control-plane node functional-314220 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-314220\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1210 08:03:01.158210  433799 out.go:360] Setting OutFile to fd 1 ...
I1210 08:03:01.158388  433799 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:03:01.158414  433799 out.go:374] Setting ErrFile to fd 2...
I1210 08:03:01.158433  433799 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:03:01.158826  433799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 08:03:01.159223  433799 mustload.go:66] Loading cluster: functional-314220
I1210 08:03:01.159948  433799 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:03:01.160672  433799 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
I1210 08:03:01.185309  433799 host.go:66] Checking if "functional-314220" exists ...
I1210 08:03:01.185644  433799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 08:03:01.320514  433799 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:03:01.298428974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 08:03:01.320651  433799 api_server.go:166] Checking apiserver status ...
I1210 08:03:01.320718  433799 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 08:03:01.320768  433799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
I1210 08:03:01.355133  433799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
W1210 08:03:01.499917  433799 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1210 08:03:01.503165  433799 out.go:179] * The control-plane node functional-314220 apiserver is not running: (state=Stopped)
I1210 08:03:01.506101  433799 out.go:179]   To start a cluster, run: "minikube start -p functional-314220"

stdout: * The control-plane node functional-314220 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-314220"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 433798: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)
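
Note: the tunnel exits 103 because minikube's own apiserver probe (the pgrep in the api_server.go lines above) finds no kube-apiserver process. The same probe can be run by hand; a sketch:

	out/minikube-linux-arm64 -p functional-314220 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"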

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-314220 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-314220 apply -f testdata/testsvc.yaml: exit status 1 (108.983716ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-314220 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (112.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.108.159.154": Temporary Error: Get "http://10.108.159.154": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-314220 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-314220 get svc nginx-svc: exit status 1 (75.509745ms)

** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-314220 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (112.27s)
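
Note: the 112s duration is the test's HTTP retry loop timing out against the ClusterIP; with the apiserver refusing connections, nothing can answer at 10.108.159.154 through the tunnel. A direct check with a short timeout (curl assumed on the runner):

	curl -m 5 http://10.108.159.154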

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765353900941209775" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765353900941209775" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765353900941209775" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001/test-1765353900941209775
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.249173ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1210 08:05:01.287721  378528 retry.go:31] will retry after 291.604347ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 08:05 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 08:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 08:05 test-1765353900941209775
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh cat /mount-9p/test-1765353900941209775
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-314220 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-314220 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (58.96602ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-314220 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (268.3721ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=43247)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 10 08:05 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 10 08:05 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 10 08:05 test-1765353900941209775
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-314220 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
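
Note: the debug output above shows the 9p mount itself was healthy (the findmnt line and all three test files are present); only the kubectl-driven busybox-mount pod step failed, again on port 8441. The host-side half can be re-checked in isolation with the same commands the test uses:

	out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-314220 ssh -- ls -la /mount-9p
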
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:43247
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001:/mount-9p --alsologtostderr -v=1] stderr:
I1210 08:05:01.007857  436265 out.go:360] Setting OutFile to fd 1 ...
I1210 08:05:01.008086  436265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:01.008107  436265 out.go:374] Setting ErrFile to fd 2...
I1210 08:05:01.008122  436265 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:01.008472  436265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 08:05:01.008753  436265 mustload.go:66] Loading cluster: functional-314220
I1210 08:05:01.009146  436265 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:01.009767  436265 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
I1210 08:05:01.029277  436265 host.go:66] Checking if "functional-314220" exists ...
I1210 08:05:01.029625  436265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 08:05:01.133028  436265 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:05:01.117770305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1210 08:05:01.133196  436265 cli_runner.go:164] Run: docker network inspect functional-314220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 08:05:01.158419  436265 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001 into VM as /mount-9p ...
I1210 08:05:01.161546  436265 out.go:179]   - Mount type:   9p
I1210 08:05:01.164419  436265 out.go:179]   - User ID:      docker
I1210 08:05:01.167356  436265 out.go:179]   - Group ID:     docker
I1210 08:05:01.170195  436265 out.go:179]   - Version:      9p2000.L
I1210 08:05:01.173203  436265 out.go:179]   - Message Size: 262144
I1210 08:05:01.176217  436265 out.go:179]   - Options:      map[]
I1210 08:05:01.179194  436265 out.go:179]   - Bind Address: 192.168.49.1:43247
I1210 08:05:01.182111  436265 out.go:179] * Userspace file server: 
I1210 08:05:01.182404  436265 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1210 08:05:01.182508  436265 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
I1210 08:05:01.207311  436265 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
I1210 08:05:01.310212  436265 mount.go:180] unmount for /mount-9p ran successfully
I1210 08:05:01.310245  436265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1210 08:05:01.318976  436265 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=43247,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1210 08:05:01.330186  436265 main.go:127] stdlog: ufs.go:141 connected
I1210 08:05:01.330351  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tversion tag 65535 msize 262144 version '9P2000.L'
I1210 08:05:01.330400  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rversion tag 65535 msize 262144 version '9P2000'
I1210 08:05:01.330624  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1210 08:05:01.330696  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rattach tag 0 aqid (ed6f13 74acb87 'd')
I1210 08:05:01.330985  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 0
I1210 08:05:01.331147  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f13 74acb87 'd') m d775 at 0 mt 1765353900 l 4096 t 0 d 0 ext )
I1210 08:05:01.335835  436265 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/.mount-process: {Name:mk59439a1f3902cd8d6643dd71f3769bd067fad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 08:05:01.336037  436265 mount.go:105] mount successful: ""
I1210 08:05:01.339536  436265 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo615354431/001 to /mount-9p
I1210 08:05:01.342557  436265 out.go:203] 
I1210 08:05:01.345355  436265 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1210 08:05:02.127450  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 0
I1210 08:05:02.127533  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f13 74acb87 'd') m d775 at 0 mt 1765353900 l 4096 t 0 d 0 ext )
I1210 08:05:02.127898  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 1 
I1210 08:05:02.127936  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 
I1210 08:05:02.128050  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Topen tag 0 fid 1 mode 0
I1210 08:05:02.128099  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Ropen tag 0 qid (ed6f13 74acb87 'd') iounit 0
I1210 08:05:02.128205  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 0
I1210 08:05:02.128249  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f13 74acb87 'd') m d775 at 0 mt 1765353900 l 4096 t 0 d 0 ext )
I1210 08:05:02.128394  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 1 offset 0 count 262120
I1210 08:05:02.128502  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 258
I1210 08:05:02.128609  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 1 offset 258 count 261862
I1210 08:05:02.128637  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 0
I1210 08:05:02.128747  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 1 offset 258 count 262120
I1210 08:05:02.128774  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 0
I1210 08:05:02.128898  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1210 08:05:02.128933  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 (ed6f14 74acb87 '') 
I1210 08:05:02.129068  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.129106  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f14 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.129233  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.129268  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f14 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.129382  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 2
I1210 08:05:02.129408  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.129527  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 2 0:'test-1765353900941209775' 
I1210 08:05:02.129558  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 (ed6f16 74acb87 '') 
I1210 08:05:02.129655  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.129687  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('test-1765353900941209775' 'jenkins' 'jenkins' '' q (ed6f16 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.129809  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.129841  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('test-1765353900941209775' 'jenkins' 'jenkins' '' q (ed6f16 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.129957  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 2
I1210 08:05:02.129981  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.130115  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1210 08:05:02.130155  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 (ed6f15 74acb87 '') 
I1210 08:05:02.130259  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.130294  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f15 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.130409  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.130445  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f15 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.130563  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 2
I1210 08:05:02.130591  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.130699  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 1 offset 258 count 262120
I1210 08:05:02.130729  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 0
I1210 08:05:02.130859  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 1
I1210 08:05:02.130891  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.419282  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 1 0:'test-1765353900941209775' 
I1210 08:05:02.419358  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 (ed6f16 74acb87 '') 
I1210 08:05:02.419541  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 1
I1210 08:05:02.419589  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('test-1765353900941209775' 'jenkins' 'jenkins' '' q (ed6f16 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.419751  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 1 newfid 2 
I1210 08:05:02.419784  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 
I1210 08:05:02.419904  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Topen tag 0 fid 2 mode 0
I1210 08:05:02.419953  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Ropen tag 0 qid (ed6f16 74acb87 '') iounit 0
I1210 08:05:02.420100  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 1
I1210 08:05:02.420139  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('test-1765353900941209775' 'jenkins' 'jenkins' '' q (ed6f16 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.420276  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 2 offset 0 count 262120
I1210 08:05:02.420323  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 24
I1210 08:05:02.420447  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 2 offset 24 count 262120
I1210 08:05:02.420474  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 0
I1210 08:05:02.420662  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 2 offset 24 count 262120
I1210 08:05:02.420697  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 0
I1210 08:05:02.420931  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 2
I1210 08:05:02.420984  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.421200  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 1
I1210 08:05:02.421229  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.750595  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 0
I1210 08:05:02.750675  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f13 74acb87 'd') m d775 at 0 mt 1765353900 l 4096 t 0 d 0 ext )
I1210 08:05:02.751062  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 1 
I1210 08:05:02.751105  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 
I1210 08:05:02.751271  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Topen tag 0 fid 1 mode 0
I1210 08:05:02.751323  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Ropen tag 0 qid (ed6f13 74acb87 'd') iounit 0
I1210 08:05:02.751471  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 0
I1210 08:05:02.751506  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (ed6f13 74acb87 'd') m d775 at 0 mt 1765353900 l 4096 t 0 d 0 ext )
I1210 08:05:02.751679  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 1 offset 0 count 262120
I1210 08:05:02.751787  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 258
I1210 08:05:02.751927  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 1 offset 258 count 261862
I1210 08:05:02.751959  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 0
I1210 08:05:02.752094  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 1 offset 258 count 262120
I1210 08:05:02.752121  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 0
I1210 08:05:02.752282  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1210 08:05:02.752314  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 (ed6f14 74acb87 '') 
I1210 08:05:02.752438  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.752474  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f14 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.752602  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.752642  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (ed6f14 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.752802  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 2
I1210 08:05:02.752846  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.752985  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 2 0:'test-1765353900941209775' 
I1210 08:05:02.753026  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 (ed6f16 74acb87 '') 
I1210 08:05:02.753149  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.753193  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('test-1765353900941209775' 'jenkins' 'jenkins' '' q (ed6f16 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.753317  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.753352  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('test-1765353900941209775' 'jenkins' 'jenkins' '' q (ed6f16 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.753485  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 2
I1210 08:05:02.753514  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.753653  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1210 08:05:02.753697  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rwalk tag 0 (ed6f15 74acb87 '') 
I1210 08:05:02.753822  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.753858  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f15 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.753984  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tstat tag 0 fid 2
I1210 08:05:02.754019  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (ed6f15 74acb87 '') m 644 at 0 mt 1765353900 l 24 t 0 d 0 ext )
I1210 08:05:02.754144  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 2
I1210 08:05:02.754170  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.754293  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tread tag 0 fid 1 offset 258 count 262120
I1210 08:05:02.754338  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rread tag 0 count 0
I1210 08:05:02.754478  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 1
I1210 08:05:02.754514  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:02.755963  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1210 08:05:02.756046  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rerror tag 0 ename 'file not found' ecode 0
I1210 08:05:03.024790  436265 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:45198 Tclunk tag 0 fid 0
I1210 08:05:03.024848  436265 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:45198 Rclunk tag 0
I1210 08:05:03.025901  436265 main.go:127] stdlog: ufs.go:147 disconnected
I1210 08:05:03.048450  436265 out.go:179] * Unmounting /mount-9p ...
I1210 08:05:03.051459  436265 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1210 08:05:03.059209  436265 mount.go:180] unmount for /mount-9p ran successfully
I1210 08:05:03.059340  436265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/.mount-process: {Name:mk59439a1f3902cd8d6643dd71f3769bd067fad1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 08:05:03.062562  436265 out.go:203] 
W1210 08:05:03.065501  436265 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1210 08:05:03.068357  436265 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.21s)
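The 9p trace above tells the story: the userspace file server answered every Tstat/Tread for the files created on the host, but the final Twalk for 'pod-dates' returned Rerror 'file not found', because the busybox-mount pod never ran; kubectl replace could not reach the apiserver (dial tcp 192.168.49.2:8441: connect: connection refused). Below is a minimal Go sketch of the precondition the test lost, assuming the apiserver address from the log; waitForAPIServer is a hypothetical helper, not part of the test suite.

	// apiserver_wait.go: a sketch (not minikube code) that polls the apiserver's
	// TCP endpoint before issuing kubectl commands, the precondition this test
	// lost when it saw "connection refused".
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForAPIServer is a hypothetical helper: it dials host:port until the
	// connection succeeds or the deadline passes.
	func waitForAPIServer(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond) // back off before retrying
		}
		return fmt.Errorf("apiserver %s not reachable within %s", addr, timeout)
	}

	func main() {
		// 192.168.49.2:8441 is the address the failing kubectl step tried to reach.
		if err := waitForAPIServer("192.168.49.2:8441", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}

Gating the kubectl steps on a probe like this would separate "mount broken" from "apiserver down" in the failure output; here the mount itself was healthy.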

TestJSONOutput/pause/Command (1.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-878886 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-878886 --output=json --user=testUser: exit status 80 (1.716187471s)

-- stdout --
	{"specversion":"1.0","id":"59366091-1b6a-46cb-b8af-b3e8427602fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-878886 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"770c0a3e-255a-496e-8c8d-bf58ff4f0d00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T08:21:15Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"5c58b542-0d6d-4c69-aa49-4cd9f02aa358","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-878886 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.72s)
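With --output=json, minikube emits one CloudEvents-style JSON object per line, exactly the records shown above (specversion, id, source, type, datacontenttype, data). A short Go sketch that consumes such a stream and surfaces the io.k8s.sigs.minikube.error events; the struct fields mirror the log lines, but the program itself is only illustrative, not an official minikube client.

	// events_scan.go: a sketch for consuming minikube's --output=json stream,
	// e.g. `minikube pause --output=json | events_scan`.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models the CloudEvents-style lines shown in the failure output;
	// every value under "data" is a string in those records.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long lines
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON noise
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event %s: %s\n", ev.Data["name"], ev.Data["message"])
			}
		}
	}

Run against the output above, this would print the GUEST_PAUSE event with the runc message, which is the actual failure buried in the stream.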

TestJSONOutput/unpause/Command (1.92s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-878886 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-878886 --output=json --user=testUser: exit status 80 (1.924422929s)

-- stdout --
	{"specversion":"1.0","id":"ef67bb11-ec38-4b3e-b448-2f6a5d8ecd36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-878886 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"64eff570-529b-4726-ad66-85b3d05bfc62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-10T08:21:17Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"9fd8d3f6-2db8-49dc-9ba9-8cf7a12aba5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-878886 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.92s)
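GUEST_PAUSE and GUEST_UNPAUSE both trace back to the same probe, sudo runc list -f json, which exits 1 here with "open /run/runc: no such file or directory" (runc's state directory, which CRI-O's runc invocations populate, does not exist yet on this node). A Go sketch of that probe, with the missing state directory mapped to an empty container list; that fallback is an assumption made for illustration, not necessarily how minikube handles it.

	// runc_list.go: a sketch of the probe both pause failures trace back to.
	// Treating a missing /run/runc as "no containers" is an assumption made
	// here for illustration only.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func listRuncContainers() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// The failing runs hit exactly this: runc exits 1 because its
			// state directory /run/runc does not exist.
			if strings.Contains(string(out), "no such file or directory") {
				return "[]", nil // assume: nothing has run under runc yet
			}
			return "", fmt.Errorf("runc list: %v: %s", err, out)
		}
		return string(out), nil
	}

	func main() {
		containers, err := listRuncContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println(containers)
	}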

TestKubernetesUpgrade (780.92s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-470056 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1210 08:40:04.884349  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-470056 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.478778167s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-470056
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-470056: (1.553276588s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-470056 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-470056 status --format={{.Host}}: exit status 7 (137.122172ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
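Exit status 7 from status is tolerated because the cluster was just stopped; the failing step is the second start below, which moves the profile to v1.35.0-beta.0 and times out after 12 minutes. A Go sketch of the CLI sequence the test drives, with the profile name and flags copied from the log; the run helper is illustrative, not the test's own code.

	// upgrade_flow.go: a sketch of the sequence version_upgrade_test.go runs.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(args ...string) error {
		cmd := exec.Command("out/minikube-linux-arm64", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		profile := "kubernetes-upgrade-470056" // profile name from the log
		steps := [][]string{
			{"start", "-p", profile, "--memory=3072", "--kubernetes-version=v1.28.0",
				"--driver=docker", "--container-runtime=crio"},
			{"stop", "-p", profile},
			{"status", "-p", profile, "--format={{.Host}}"}, // exit 7 = Stopped, tolerated
			{"start", "-p", profile, "--memory=3072", "--kubernetes-version=v1.35.0-beta.0",
				"--driver=docker", "--container-runtime=crio"}, // the step that fails below
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				fmt.Fprintf(os.Stderr, "step %v: %v\n", s, err)
			}
		}
	}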
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-470056 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-470056 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m19.976288812s)

-- stdout --
	* [kubernetes-upgrade-470056] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-470056" primary control-plane node in "kubernetes-upgrade-470056" cluster
	* Pulling base image v0.0.48-1765319469-22089 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1210 08:40:30.040162  559515 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:40:30.040316  559515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:40:30.040328  559515 out.go:374] Setting ErrFile to fd 2...
	I1210 08:40:30.040334  559515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:40:30.040721  559515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:40:30.041220  559515 out.go:368] Setting JSON to false
	I1210 08:40:30.042306  559515 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12180,"bootTime":1765343850,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 08:40:30.042423  559515 start.go:143] virtualization:  
	I1210 08:40:30.051546  559515 out.go:179] * [kubernetes-upgrade-470056] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 08:40:30.054446  559515 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:40:30.054564  559515 notify.go:221] Checking for updates...
	I1210 08:40:30.060171  559515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:40:30.062908  559515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:40:30.065796  559515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 08:40:30.068574  559515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:40:30.071374  559515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:40:30.074806  559515 config.go:182] Loaded profile config "kubernetes-upgrade-470056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 08:40:30.075505  559515 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:40:30.145265  559515 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:40:30.145438  559515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:40:30.244737  559515 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:40:30.2317479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:40:30.244848  559515 docker.go:319] overlay module found
	I1210 08:40:30.247847  559515 out.go:179] * Using the docker driver based on existing profile
	I1210 08:40:30.250588  559515 start.go:309] selected driver: docker
	I1210 08:40:30.250608  559515 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:40:30.250732  559515 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:40:30.251479  559515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:40:30.368120  559515 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:40:30.354165413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:40:30.368455  559515 cni.go:84] Creating CNI manager for ""
	I1210 08:40:30.368523  559515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 08:40:30.368566  559515 start.go:353] cluster config:
	{Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:40:30.371722  559515 out.go:179] * Starting "kubernetes-upgrade-470056" primary control-plane node in "kubernetes-upgrade-470056" cluster
	I1210 08:40:30.374452  559515 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 08:40:30.377369  559515 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 08:40:30.380261  559515 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 08:40:30.380315  559515 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 08:40:30.380330  559515 cache.go:65] Caching tarball of preloaded images
	I1210 08:40:30.380421  559515 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 08:40:30.380436  559515 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 08:40:30.380553  559515 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/config.json ...
	I1210 08:40:30.380752  559515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 08:40:30.410489  559515 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 08:40:30.410515  559515 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 08:40:30.410530  559515 cache.go:243] Successfully downloaded all kic artifacts
	I1210 08:40:30.410565  559515 start.go:360] acquireMachinesLock for kubernetes-upgrade-470056: {Name:mk76103b2f0fae4fa69e0d1baba03cd5feffd6fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 08:40:30.410624  559515 start.go:364] duration metric: took 35.045µs to acquireMachinesLock for "kubernetes-upgrade-470056"
	I1210 08:40:30.410645  559515 start.go:96] Skipping create...Using existing machine configuration
	I1210 08:40:30.410655  559515 fix.go:54] fixHost starting: 
	I1210 08:40:30.410917  559515 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470056 --format={{.State.Status}}
	I1210 08:40:30.440722  559515 fix.go:112] recreateIfNeeded on kubernetes-upgrade-470056: state=Stopped err=<nil>
	W1210 08:40:30.440757  559515 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 08:40:30.443984  559515 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-470056" ...
	I1210 08:40:30.444088  559515 cli_runner.go:164] Run: docker start kubernetes-upgrade-470056
	I1210 08:40:30.811799  559515 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470056 --format={{.State.Status}}
	I1210 08:40:30.848700  559515 kic.go:430] container "kubernetes-upgrade-470056" state is running.
	I1210 08:40:30.849107  559515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470056
	I1210 08:40:30.887232  559515 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/config.json ...
	I1210 08:40:30.887464  559515 machine.go:94] provisionDockerMachine start ...
	I1210 08:40:30.887537  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:30.917139  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:30.917487  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:30.917497  559515 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 08:40:30.918619  559515 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 08:40:34.090535  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470056
	
	I1210 08:40:34.090615  559515 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-470056"
	I1210 08:40:34.090716  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.128340  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:34.128657  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:34.128673  559515 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-470056 && echo "kubernetes-upgrade-470056" | sudo tee /etc/hostname
	I1210 08:40:34.297852  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470056
	
	I1210 08:40:34.298008  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.376678  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:34.376987  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:34.377003  559515 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-470056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-470056/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-470056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 08:40:34.547600  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 08:40:34.547679  559515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 08:40:34.547721  559515 ubuntu.go:190] setting up certificates
	I1210 08:40:34.547762  559515 provision.go:84] configureAuth start
	I1210 08:40:34.547888  559515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470056
	I1210 08:40:34.573984  559515 provision.go:143] copyHostCerts
	I1210 08:40:34.574057  559515 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 08:40:34.574066  559515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 08:40:34.574142  559515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 08:40:34.574236  559515 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 08:40:34.574241  559515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 08:40:34.574266  559515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 08:40:34.574315  559515 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 08:40:34.574320  559515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 08:40:34.574343  559515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 08:40:34.574398  559515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-470056 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-470056 localhost minikube]
	I1210 08:40:34.686160  559515 provision.go:177] copyRemoteCerts
	I1210 08:40:34.686273  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 08:40:34.686347  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.722343  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:34.832542  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 08:40:34.861570  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 08:40:34.896052  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 08:40:34.921367  559515 provision.go:87] duration metric: took 373.563126ms to configureAuth
	I1210 08:40:34.921450  559515 ubuntu.go:206] setting minikube options for container-runtime
	I1210 08:40:34.921682  559515 config.go:182] Loaded profile config "kubernetes-upgrade-470056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 08:40:34.921855  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.946408  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:34.946719  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:34.946736  559515 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 08:40:35.353912  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 08:40:35.353936  559515 machine.go:97] duration metric: took 4.466461328s to provisionDockerMachine
	I1210 08:40:35.353947  559515 start.go:293] postStartSetup for "kubernetes-upgrade-470056" (driver="docker")
	I1210 08:40:35.353983  559515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 08:40:35.354100  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 08:40:35.354167  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.374974  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.493386  559515 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 08:40:35.497233  559515 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 08:40:35.497304  559515 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 08:40:35.497336  559515 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 08:40:35.497408  559515 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 08:40:35.497512  559515 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 08:40:35.497643  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 08:40:35.510121  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 08:40:35.541645  559515 start.go:296] duration metric: took 187.683107ms for postStartSetup
	I1210 08:40:35.541805  559515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:40:35.541878  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.569359  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.679440  559515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 08:40:35.687359  559515 fix.go:56] duration metric: took 5.276695711s for fixHost
	I1210 08:40:35.687391  559515 start.go:83] releasing machines lock for "kubernetes-upgrade-470056", held for 5.276755272s
	I1210 08:40:35.687546  559515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470056
	I1210 08:40:35.715293  559515 ssh_runner.go:195] Run: cat /version.json
	I1210 08:40:35.715345  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.715591  559515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 08:40:35.715643  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.747155  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.760447  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.875652  559515 ssh_runner.go:195] Run: systemctl --version
	I1210 08:40:35.992980  559515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 08:40:36.060420  559515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 08:40:36.068629  559515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 08:40:36.068701  559515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 08:40:36.080402  559515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
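The cni.go step above scans /etc/cni/net.d and renames any bridge or podman configs with a .mk_disabled suffix so they cannot conflict with the CNI minikube installs; in this run nothing matched. A sketch of the same rename pass in Go (illustrative only; the real step is the find/mv one-liner shown in the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs so the runtime
    // ignores them, skipping files that were already disabled.
    func disableBridgeCNI(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                old := filepath.Join(dir, name)
                if err := os.Rename(old, old+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Printf("disabled %s\n", old)
            }
        }
        return nil
    }

    func main() {
        if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }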
	I1210 08:40:36.080426  559515 start.go:496] detecting cgroup driver to use...
	I1210 08:40:36.080457  559515 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 08:40:36.080505  559515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 08:40:36.097918  559515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 08:40:36.113386  559515 docker.go:218] disabling cri-docker service (if available) ...
	I1210 08:40:36.113506  559515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 08:40:36.140941  559515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 08:40:36.164336  559515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 08:40:36.341219  559515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 08:40:36.510314  559515 docker.go:234] disabling docker service ...
	I1210 08:40:36.510436  559515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 08:40:36.544014  559515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 08:40:36.565058  559515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 08:40:36.755122  559515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 08:40:36.954246  559515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
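Before handing the node to CRI-O, the cri-docker and docker units are silenced in a fixed order: stop the socket, stop the service, disable the socket, mask the service, then verify with is-active. A stdlib Go sketch of that sequence (unit names taken from the log; error handling deliberately non-fatal, since a unit may already be absent):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // silenceUnit mirrors the systemctl sequence in the log for one
    // socket/service pair; failures are reported but not fatal.
    func silenceUnit(socket, service string) {
        for _, args := range [][]string{
            {"systemctl", "stop", "-f", socket},
            {"systemctl", "stop", "-f", service},
            {"systemctl", "disable", socket},
            {"systemctl", "mask", service},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %v (%s)\n", args, err, out)
            }
        }
    }

    func main() {
        silenceUnit("cri-docker.socket", "cri-docker.service")
        silenceUnit("docker.socket", "docker.service")
    }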
	I1210 08:40:36.973758  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 08:40:36.998372  559515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 08:40:36.998450  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.011466  559515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 08:40:37.011569  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.024093  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.038124  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.050180  559515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 08:40:37.059304  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.069187  559515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.078300  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.088529  559515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 08:40:37.097305  559515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 08:40:37.105542  559515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:40:37.219171  559515 ssh_runner.go:195] Run: sudo systemctl restart crio
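The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod" directly after it. The same edits expressed with Go regexps, as a sketch of the technique rather than minikube's implementation:

    package main

    import (
        "fmt"
        "regexp"
    )

    func rewriteCrioConf(conf, pauseImage, cgroupMgr string) string {
        // replace whole lines, as the sed patterns ^.*key = .*$ do
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupMgr))
        // drop any existing conmon_cgroup, then re-add it after cgroup_manager
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
        return conf
    }

    func main() {
        in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"x\"\n"
        fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10.1", "cgroupfs"))
    }

After the rewrite, daemon-reload plus a crio restart (as in the log) makes the new pause image and cgroup driver take effect.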
	I1210 08:40:37.383269  559515 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 08:40:37.383392  559515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 08:40:37.387716  559515 start.go:564] Will wait 60s for crictl version
	I1210 08:40:37.387825  559515 ssh_runner.go:195] Run: which crictl
	I1210 08:40:37.392809  559515 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 08:40:37.427720  559515 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 08:40:37.427849  559515 ssh_runner.go:195] Run: crio --version
	I1210 08:40:37.463991  559515 ssh_runner.go:195] Run: crio --version
	I1210 08:40:37.501980  559515 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 08:40:37.505579  559515 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-470056 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 08:40:37.527286  559515 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 08:40:37.531117  559515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:40:37.540886  559515 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 08:40:37.541005  559515 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 08:40:37.541068  559515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:40:37.575189  559515 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1210 08:40:37.575294  559515 ssh_runner.go:195] Run: which lz4
	I1210 08:40:37.579111  559515 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 08:40:37.583005  559515 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 08:40:37.583068  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1210 08:40:39.230262  559515 crio.go:462] duration metric: took 1.651193004s to copy over tarball
	I1210 08:40:39.230351  559515 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 08:40:41.539329  559515 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.308951206s)
	I1210 08:40:41.539353  559515 crio.go:469] duration metric: took 2.309048692s to extract the tarball
	I1210 08:40:41.539360  559515 ssh_runner.go:146] rm: /preloaded.tar.lz4
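The preload path above is: stat the tarball on the machine, scp the ~300 MB archive over only if it is missing, extract it into /var with tar over lz4, then delete it. A local Go sketch of the extract step (plain exec instead of minikube's ssh_runner; the scp fallback is elided):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload extracts a preloaded image tarball into /var, preserving
    // extended attributes as the log's tar invocation does, then removes it.
    func extractPreload(tarball string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload missing, would need to copy it first: %w", err)
        }
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract: %v: %s", err, out)
        }
        return os.Remove(tarball)
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }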
	I1210 08:40:41.680093  559515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:40:41.716512  559515 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 08:40:41.716537  559515 cache_images.go:86] Images are preloaded, skipping loading
	I1210 08:40:41.716551  559515 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 08:40:41.716645  559515 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-470056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 08:40:41.716748  559515 ssh_runner.go:195] Run: crio config
	I1210 08:40:41.794006  559515 cni.go:84] Creating CNI manager for ""
	I1210 08:40:41.794030  559515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 08:40:41.794054  559515 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 08:40:41.794078  559515 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-470056 NodeName:kubernetes-upgrade-470056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 08:40:41.794211  559515 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-470056"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 08:40:41.794290  559515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 08:40:41.803239  559515 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 08:40:41.803363  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 08:40:41.810785  559515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1210 08:40:41.824511  559515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 08:40:41.845081  559515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
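The kubeadm.yaml shown above is rendered from the options dumped at kubeadm.go:190 and written to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template rendering of one fragment of it; the field names here are chosen for illustration and are not minikube's real template:

    package main

    import (
        "os"
        "text/template"
    )

    // frag is a small slice of the generated InitConfiguration.
    const frag = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: unix://{{.CRISocket}}\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(frag))
        t.Execute(os.Stdout, map[string]any{
            "AdvertiseAddress": "192.168.85.2",
            "APIServerPort":    8443,
            "CRISocket":        "/var/run/crio/crio.sock",
            "NodeName":         "kubernetes-upgrade-470056",
        })
    }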
	I1210 08:40:41.871774  559515 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 08:40:41.875920  559515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:40:41.888908  559515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:40:42.014653  559515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 08:40:42.034732  559515 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056 for IP: 192.168.85.2
	I1210 08:40:42.034805  559515 certs.go:195] generating shared ca certs ...
	I1210 08:40:42.034838  559515 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:40:42.035067  559515 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 08:40:42.035169  559515 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 08:40:42.035207  559515 certs.go:257] generating profile certs ...
	I1210 08:40:42.035342  559515 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/client.key
	I1210 08:40:42.035478  559515 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/apiserver.key.45c47546
	I1210 08:40:42.035578  559515 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/proxy-client.key
	I1210 08:40:42.035748  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 08:40:42.035825  559515 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 08:40:42.035863  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 08:40:42.035926  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 08:40:42.035986  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 08:40:42.036043  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 08:40:42.036135  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 08:40:42.036922  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 08:40:42.079780  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 08:40:42.128421  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 08:40:42.179894  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 08:40:42.203313  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 08:40:42.226109  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 08:40:42.248667  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 08:40:42.271268  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 08:40:42.292926  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 08:40:42.313701  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 08:40:42.334504  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 08:40:42.354555  559515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 08:40:42.368724  559515 ssh_runner.go:195] Run: openssl version
	I1210 08:40:42.377006  559515 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.385181  559515 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 08:40:42.394792  559515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.398655  559515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.398722  559515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.439750  559515 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 08:40:42.447349  559515 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.454987  559515 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 08:40:42.464323  559515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.468227  559515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.468337  559515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.511301  559515 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 08:40:42.519128  559515 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.527862  559515 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 08:40:42.535897  559515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.539729  559515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.539837  559515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.580647  559515 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 08:40:42.588135  559515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 08:40:42.591981  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 08:40:42.634050  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 08:40:42.676821  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 08:40:42.719106  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 08:40:42.761009  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 08:40:42.802912  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
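Each `openssl x509 -checkend 86400` above asks whether a certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The equivalent check in Go's standard library, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }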
	I1210 08:40:42.845199  559515 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:40:42.845325  559515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 08:40:42.845407  559515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 08:40:42.919731  559515 cri.go:89] found id: ""
	I1210 08:40:42.919869  559515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 08:40:42.939688  559515 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 08:40:42.939759  559515 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 08:40:42.939836  559515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 08:40:42.951898  559515 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 08:40:42.952491  559515 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-470056" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:40:42.952776  559515 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-376671/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-470056" cluster setting kubeconfig missing "kubernetes-upgrade-470056" context setting]
	I1210 08:40:42.953270  559515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:40:42.953988  559515 kapi.go:59] client config for kubernetes-upgrade-470056: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 08:40:42.954742  559515 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 08:40:42.954787  559515 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 08:40:42.954808  559515 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 08:40:42.954828  559515 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 08:40:42.954858  559515 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 08:40:42.955197  559515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 08:40:42.969776  559515 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 08:40:09.943357327 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 08:40:41.863683282 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-470056"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
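The drift above is the kubeadm API migration from v1beta3 to v1beta4: extraArgs change from a string map to an ordered list of name/value pairs, the etcd proxy-refresh-interval override is dropped, and kubernetesVersion is bumped from v1.28.0 for the upgrade. A sketch of the shape change, with types simplified from the kubeadm API:

    package main

    import "fmt"

    // v1beta3 style: free-form string map.
    type ControlPlaneV3 struct {
        ExtraArgs map[string]string
    }

    // v1beta4 style: ordered name/value pairs.
    type Arg struct{ Name, Value string }
    type ControlPlaneV4 struct {
        ExtraArgs []Arg
    }

    // migrate converts the old map form to the new list form. Note that Go
    // map iteration order is unspecified, so a real converter would sort.
    func migrate(v3 ControlPlaneV3) ControlPlaneV4 {
        var v4 ControlPlaneV4
        for k, v := range v3.ExtraArgs {
            v4.ExtraArgs = append(v4.ExtraArgs, Arg{k, v})
        }
        return v4
    }

    func main() {
        old := ControlPlaneV3{ExtraArgs: map[string]string{"leader-elect": "false"}}
        fmt.Printf("%+v\n", migrate(old))
    }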
	I1210 08:40:42.969838  559515 kubeadm.go:1161] stopping kube-system containers ...
	I1210 08:40:42.969864  559515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 08:40:42.969937  559515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 08:40:43.022682  559515 cri.go:89] found id: ""
	I1210 08:40:43.022839  559515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 08:40:43.042594  559515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 08:40:43.051165  559515 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 10 08:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 10 08:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 10 08:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 10 08:40 /etc/kubernetes/scheduler.conf
	
	I1210 08:40:43.051260  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 08:40:43.060495  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 08:40:43.070827  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 08:40:43.079515  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 08:40:43.079584  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 08:40:43.087808  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 08:40:43.096558  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 08:40:43.096678  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 08:40:43.104568  559515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 08:40:43.116430  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:43.166741  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:44.689612  559515 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.52278604s)
	I1210 08:40:44.689688  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:44.978902  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:45.082435  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
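Because existing configuration files were found, the restart path re-runs individual kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`. A sketch of that sequencing with plain exec; the binary and config paths mirror the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // runPhases executes the init phases in the order the log shows,
    // stopping at the first failure.
    func runPhases(binDir, cfg string) error {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            cmd := exec.Command(binDir+"/kubeadm", args...)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := runPhases("/var/lib/minikube/binaries/v1.35.0-beta.0",
            "/var/tmp/minikube/kubeadm.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }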
	I1210 08:40:45.147975  559515 api_server.go:52] waiting for apiserver process to appear ...
	I1210 08:40:45.148082  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:45.649017  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:46.148143  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:46.648140  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:47.148846  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:47.648362  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:48.148743  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:48.648334  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:49.148359  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:49.648951  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:50.148143  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:50.648541  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:51.148583  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:51.648229  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:52.148744  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:52.648197  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:53.149056  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:53.648956  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:54.148648  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:54.648774  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:55.148903  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:55.648213  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:56.148170  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:56.648907  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:57.148214  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:57.648216  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:58.148878  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:58.648876  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:59.148770  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:59.648238  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:00.155905  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:00.648950  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:01.148330  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:01.648175  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:02.148271  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:02.649191  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:03.148697  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:03.648594  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:04.148754  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:04.648142  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:05.148812  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:05.648224  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:06.148226  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:06.648233  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:07.148194  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:07.648608  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:08.148775  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:08.648580  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:09.148232  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:09.648551  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:10.148306  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:10.649070  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:11.149082  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:11.648994  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:12.148740  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:12.648627  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:13.148721  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:13.648237  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:14.148247  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:14.648924  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:15.148777  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:15.648207  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:16.148240  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:16.648859  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:17.148215  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:17.648247  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:18.148914  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:18.648187  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:19.148172  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:19.648420  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:20.149133  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:20.648237  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:21.148326  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:21.648641  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:22.149126  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:22.649054  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:23.148468  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:23.648203  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:24.148676  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:24.649079  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:25.148433  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:25.648336  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:26.148230  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:26.649031  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:27.148156  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:27.648958  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:28.148797  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:28.648319  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:29.148945  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:29.648778  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:30.148651  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:30.648498  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:31.148804  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:31.648929  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:32.148929  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:32.648195  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:33.148134  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:33.648994  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:34.148428  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:34.649027  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:35.148893  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:35.649010  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:36.148177  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:36.648962  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:37.148192  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:37.649052  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:38.148183  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:38.648908  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:39.149180  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:39.648186  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:40.148705  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:40.648294  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:41.148209  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:41.648140  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:42.149944  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:42.649060  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:43.148153  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:43.648199  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:44.149022  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:44.648126  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
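The long run of pgrep lines above is a fixed-interval poll: check for the kube-apiserver process every 500 ms and give up after roughly 60 seconds, which is what happens in this run (the process never appears). A stdlib Go sketch of the same loop:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until the apiserver process exists or
    // the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(time.Minute); err != nil {
            fmt.Println(err)
        }
    }

When the wait times out, minikube falls back to gathering diagnostics, which is the container-listing and log-collection sequence that follows.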
	I1210 08:41:45.148953  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:41:45.149053  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:41:45.206204  559515 cri.go:89] found id: ""
	I1210 08:41:45.206229  559515 logs.go:282] 0 containers: []
	W1210 08:41:45.206239  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:41:45.206245  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:41:45.206314  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:41:45.264544  559515 cri.go:89] found id: ""
	I1210 08:41:45.264575  559515 logs.go:282] 0 containers: []
	W1210 08:41:45.264585  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:41:45.264598  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:41:45.264694  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:41:45.329454  559515 cri.go:89] found id: ""
	I1210 08:41:45.329477  559515 logs.go:282] 0 containers: []
	W1210 08:41:45.329487  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:41:45.329498  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:41:45.329568  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:41:45.369458  559515 cri.go:89] found id: ""
	I1210 08:41:45.369481  559515 logs.go:282] 0 containers: []
	W1210 08:41:45.369489  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:41:45.369495  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:41:45.369561  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:41:45.409218  559515 cri.go:89] found id: ""
	I1210 08:41:45.409241  559515 logs.go:282] 0 containers: []
	W1210 08:41:45.409249  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:41:45.409256  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:41:45.409315  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:41:45.452496  559515 cri.go:89] found id: ""
	I1210 08:41:45.452518  559515 logs.go:282] 0 containers: []
	W1210 08:41:45.452526  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:41:45.452532  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:41:45.452590  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:41:45.485187  559515 cri.go:89] found id: ""
	I1210 08:41:45.485209  559515 logs.go:282] 0 containers: []
	W1210 08:41:45.485217  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:41:45.485223  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:41:45.485280  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:41:45.531807  559515 cri.go:89] found id: ""
	I1210 08:41:45.531844  559515 logs.go:282] 0 containers: []
	W1210 08:41:45.531853  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:41:45.531862  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:41:45.531874  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:41:45.606678  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:41:45.606776  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:41:45.629606  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:41:45.629630  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:41:46.045769  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:41:46.045836  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:41:46.045864  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:41:46.082148  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:41:46.082181  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
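The "container status" gathering above uses a shell fallback, preferring crictl when it is on PATH and falling back to docker otherwise. The same choice expressed in Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus lists all containers via crictl if available,
    // otherwise via docker, mirroring the shell `which crictl || ...`.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            return exec.Command("sudo", path, "ps", "-a").CombinedOutput()
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(string(out))
    }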
	I1210 08:41:48.624627  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:41:48.634925  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:41:48.634995  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:41:48.666768  559515 cri.go:89] found id: ""
	I1210 08:41:48.666791  559515 logs.go:282] 0 containers: []
	W1210 08:41:48.666800  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:41:48.666806  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:41:48.666866  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:41:48.695875  559515 cri.go:89] found id: ""
	I1210 08:41:48.695897  559515 logs.go:282] 0 containers: []
	W1210 08:41:48.695906  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:41:48.695912  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:41:48.695971  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:41:48.727005  559515 cri.go:89] found id: ""
	I1210 08:41:48.727055  559515 logs.go:282] 0 containers: []
	W1210 08:41:48.727065  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:41:48.727071  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:41:48.727132  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:41:48.752685  559515 cri.go:89] found id: ""
	I1210 08:41:48.752708  559515 logs.go:282] 0 containers: []
	W1210 08:41:48.752717  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:41:48.752726  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:41:48.752784  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:41:48.778041  559515 cri.go:89] found id: ""
	I1210 08:41:48.778065  559515 logs.go:282] 0 containers: []
	W1210 08:41:48.778076  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:41:48.778084  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:41:48.778144  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:41:48.804118  559515 cri.go:89] found id: ""
	I1210 08:41:48.804146  559515 logs.go:282] 0 containers: []
	W1210 08:41:48.804156  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:41:48.804163  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:41:48.804229  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:41:48.829307  559515 cri.go:89] found id: ""
	I1210 08:41:48.829330  559515 logs.go:282] 0 containers: []
	W1210 08:41:48.829339  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:41:48.829345  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:41:48.829456  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:41:48.857799  559515 cri.go:89] found id: ""
	I1210 08:41:48.857822  559515 logs.go:282] 0 containers: []
	W1210 08:41:48.857832  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:41:48.857842  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:41:48.857854  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:41:48.889734  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:41:48.889767  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:41:48.921510  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:41:48.921540  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:41:48.991602  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:41:48.991641  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:41:49.009091  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:41:49.009122  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:41:49.085222  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
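
From here the bootstrap enters a retry loop: the same pgrep probe, per-component crictl listings, and log gathering recur roughly every three seconds. A compact sketch of that loop, as an illustration rather than minikube's actual source (the 6-minute deadline is a hypothetical stand-in for the real wait timeout):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run stands in for minikube's ssh_runner: it executes the command through
// /bin/bash -c, exactly as the Run: lines in the log show.
func run(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // hypothetical timeout
	for time.Now().Before(deadline) {
		// Same probe as the log: is a kube-apiserver process running?
		if run("sudo pgrep -xnf kube-apiserver.*minikube.*") == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// Fallback diagnostics, mirroring the commands gathered above.
		run("sudo crictl ps -a --quiet --name=kube-apiserver")
		run("sudo journalctl -u kubelet -n 400")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
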
	[... the same diagnostic cycle repeats roughly every 3 seconds from 08:41:51 to 08:42:18 (10 iterations, identical apart from timestamps): pgrep finds no kube-apiserver process; crictl lists 0 containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and storage-provisioner; and every "kubectl describe nodes" attempt fails with "The connection to the server localhost:8443 was refused - did you specify the right host or port?" ...]
	I1210 08:42:21.160564  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:21.170557  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:21.170630  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:21.196778  559515 cri.go:89] found id: ""
	I1210 08:42:21.196803  559515 logs.go:282] 0 containers: []
	W1210 08:42:21.196812  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:21.196818  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:21.196902  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:21.224128  559515 cri.go:89] found id: ""
	I1210 08:42:21.224151  559515 logs.go:282] 0 containers: []
	W1210 08:42:21.224159  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:21.224165  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:21.224224  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:21.248869  559515 cri.go:89] found id: ""
	I1210 08:42:21.248897  559515 logs.go:282] 0 containers: []
	W1210 08:42:21.248906  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:21.248912  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:21.248973  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:21.278022  559515 cri.go:89] found id: ""
	I1210 08:42:21.278050  559515 logs.go:282] 0 containers: []
	W1210 08:42:21.278058  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:21.278064  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:21.278163  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:21.303269  559515 cri.go:89] found id: ""
	I1210 08:42:21.303293  559515 logs.go:282] 0 containers: []
	W1210 08:42:21.303302  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:21.303307  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:21.303366  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:21.328388  559515 cri.go:89] found id: ""
	I1210 08:42:21.328413  559515 logs.go:282] 0 containers: []
	W1210 08:42:21.328423  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:21.328429  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:21.328488  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:21.354774  559515 cri.go:89] found id: ""
	I1210 08:42:21.354802  559515 logs.go:282] 0 containers: []
	W1210 08:42:21.354811  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:21.354817  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:21.354876  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:21.380126  559515 cri.go:89] found id: ""
	I1210 08:42:21.380153  559515 logs.go:282] 0 containers: []
	W1210 08:42:21.380162  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:21.380171  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:21.380182  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:21.414415  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:21.414444  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:21.481914  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:21.481951  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:21.499367  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:21.499398  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:21.563059  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:21.563090  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:21.563105  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:24.096867  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:24.107089  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:24.107165  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:24.133521  559515 cri.go:89] found id: ""
	I1210 08:42:24.133548  559515 logs.go:282] 0 containers: []
	W1210 08:42:24.133557  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:24.133563  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:24.133624  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:24.161946  559515 cri.go:89] found id: ""
	I1210 08:42:24.161982  559515 logs.go:282] 0 containers: []
	W1210 08:42:24.161991  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:24.161997  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:24.162067  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:24.190091  559515 cri.go:89] found id: ""
	I1210 08:42:24.190166  559515 logs.go:282] 0 containers: []
	W1210 08:42:24.190188  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:24.190206  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:24.190294  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:24.216240  559515 cri.go:89] found id: ""
	I1210 08:42:24.216268  559515 logs.go:282] 0 containers: []
	W1210 08:42:24.216277  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:24.216283  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:24.216344  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:24.246174  559515 cri.go:89] found id: ""
	I1210 08:42:24.246250  559515 logs.go:282] 0 containers: []
	W1210 08:42:24.246281  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:24.246301  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:24.246409  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:24.271274  559515 cri.go:89] found id: ""
	I1210 08:42:24.271298  559515 logs.go:282] 0 containers: []
	W1210 08:42:24.271307  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:24.271313  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:24.271372  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:24.296980  559515 cri.go:89] found id: ""
	I1210 08:42:24.297006  559515 logs.go:282] 0 containers: []
	W1210 08:42:24.297015  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:24.297021  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:24.297083  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:24.322910  559515 cri.go:89] found id: ""
	I1210 08:42:24.322936  559515 logs.go:282] 0 containers: []
	W1210 08:42:24.322945  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:24.322955  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:24.322970  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:24.390860  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:24.390895  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:24.406893  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:24.406921  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:24.476431  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:24.476455  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:24.476469  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:24.506372  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:24.506408  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:27.035157  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:27.046510  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:27.046586  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:27.075794  559515 cri.go:89] found id: ""
	I1210 08:42:27.075820  559515 logs.go:282] 0 containers: []
	W1210 08:42:27.075831  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:27.075837  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:27.075894  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:27.106573  559515 cri.go:89] found id: ""
	I1210 08:42:27.106600  559515 logs.go:282] 0 containers: []
	W1210 08:42:27.106609  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:27.106615  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:27.106674  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:27.134791  559515 cri.go:89] found id: ""
	I1210 08:42:27.134817  559515 logs.go:282] 0 containers: []
	W1210 08:42:27.134826  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:27.134832  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:27.134889  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:27.165401  559515 cri.go:89] found id: ""
	I1210 08:42:27.165424  559515 logs.go:282] 0 containers: []
	W1210 08:42:27.165433  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:27.165439  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:27.165499  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:27.194221  559515 cri.go:89] found id: ""
	I1210 08:42:27.194244  559515 logs.go:282] 0 containers: []
	W1210 08:42:27.194252  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:27.194258  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:27.194326  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:27.230253  559515 cri.go:89] found id: ""
	I1210 08:42:27.230275  559515 logs.go:282] 0 containers: []
	W1210 08:42:27.230284  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:27.230290  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:27.230359  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:27.261339  559515 cri.go:89] found id: ""
	I1210 08:42:27.261361  559515 logs.go:282] 0 containers: []
	W1210 08:42:27.261369  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:27.261376  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:27.261441  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:27.300953  559515 cri.go:89] found id: ""
	I1210 08:42:27.300975  559515 logs.go:282] 0 containers: []
	W1210 08:42:27.300984  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:27.300992  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:27.301004  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:27.345186  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:27.345268  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:27.425736  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:27.425840  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:27.441598  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:27.441624  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:27.527099  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:27.527169  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:27.527203  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:30.063241  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:30.075426  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:30.075505  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:30.103379  559515 cri.go:89] found id: ""
	I1210 08:42:30.103410  559515 logs.go:282] 0 containers: []
	W1210 08:42:30.103421  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:30.103428  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:30.103490  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:30.131045  559515 cri.go:89] found id: ""
	I1210 08:42:30.131071  559515 logs.go:282] 0 containers: []
	W1210 08:42:30.131080  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:30.131086  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:30.131151  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:30.162597  559515 cri.go:89] found id: ""
	I1210 08:42:30.162622  559515 logs.go:282] 0 containers: []
	W1210 08:42:30.162630  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:30.162637  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:30.162703  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:30.189715  559515 cri.go:89] found id: ""
	I1210 08:42:30.189740  559515 logs.go:282] 0 containers: []
	W1210 08:42:30.189749  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:30.189755  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:30.189828  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:30.216155  559515 cri.go:89] found id: ""
	I1210 08:42:30.216179  559515 logs.go:282] 0 containers: []
	W1210 08:42:30.216187  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:30.216193  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:30.216257  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:30.242629  559515 cri.go:89] found id: ""
	I1210 08:42:30.242716  559515 logs.go:282] 0 containers: []
	W1210 08:42:30.242739  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:30.242757  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:30.242882  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:30.272871  559515 cri.go:89] found id: ""
	I1210 08:42:30.272899  559515 logs.go:282] 0 containers: []
	W1210 08:42:30.272907  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:30.272913  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:30.272975  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:30.299655  559515 cri.go:89] found id: ""
	I1210 08:42:30.299681  559515 logs.go:282] 0 containers: []
	W1210 08:42:30.299690  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:30.299699  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:30.299712  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:30.365678  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:30.365698  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:30.365716  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:30.397989  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:30.398026  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:30.426315  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:30.426345  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:30.497112  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:30.497149  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:33.014353  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:33.025264  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:33.025335  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:33.052454  559515 cri.go:89] found id: ""
	I1210 08:42:33.052479  559515 logs.go:282] 0 containers: []
	W1210 08:42:33.052488  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:33.052494  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:33.052555  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:33.088628  559515 cri.go:89] found id: ""
	I1210 08:42:33.088698  559515 logs.go:282] 0 containers: []
	W1210 08:42:33.088726  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:33.088738  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:33.088819  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:33.115380  559515 cri.go:89] found id: ""
	I1210 08:42:33.115406  559515 logs.go:282] 0 containers: []
	W1210 08:42:33.115416  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:33.115422  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:33.115483  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:33.145517  559515 cri.go:89] found id: ""
	I1210 08:42:33.145545  559515 logs.go:282] 0 containers: []
	W1210 08:42:33.145554  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:33.145560  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:33.145628  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:33.173056  559515 cri.go:89] found id: ""
	I1210 08:42:33.173084  559515 logs.go:282] 0 containers: []
	W1210 08:42:33.173092  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:33.173098  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:33.173159  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:33.198908  559515 cri.go:89] found id: ""
	I1210 08:42:33.198935  559515 logs.go:282] 0 containers: []
	W1210 08:42:33.198944  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:33.198950  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:33.199051  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:33.223794  559515 cri.go:89] found id: ""
	I1210 08:42:33.223818  559515 logs.go:282] 0 containers: []
	W1210 08:42:33.223827  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:33.223833  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:33.223897  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:33.248884  559515 cri.go:89] found id: ""
	I1210 08:42:33.248907  559515 logs.go:282] 0 containers: []
	W1210 08:42:33.248916  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:33.248924  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:33.248937  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:33.265342  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:33.265374  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:33.332133  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:33.332195  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:33.332223  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:33.368984  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:33.369025  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:33.396886  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:33.396914  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:35.964397  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:35.974642  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:35.974716  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:36.006460  559515 cri.go:89] found id: ""
	I1210 08:42:36.006490  559515 logs.go:282] 0 containers: []
	W1210 08:42:36.006501  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:36.006508  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:36.006583  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:36.036021  559515 cri.go:89] found id: ""
	I1210 08:42:36.036049  559515 logs.go:282] 0 containers: []
	W1210 08:42:36.036058  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:36.036065  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:36.036126  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:36.062167  559515 cri.go:89] found id: ""
	I1210 08:42:36.062194  559515 logs.go:282] 0 containers: []
	W1210 08:42:36.062203  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:36.062209  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:36.062271  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:36.089939  559515 cri.go:89] found id: ""
	I1210 08:42:36.089966  559515 logs.go:282] 0 containers: []
	W1210 08:42:36.089975  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:36.089981  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:36.090088  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:36.116371  559515 cri.go:89] found id: ""
	I1210 08:42:36.116437  559515 logs.go:282] 0 containers: []
	W1210 08:42:36.116459  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:36.116478  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:36.116569  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:36.145964  559515 cri.go:89] found id: ""
	I1210 08:42:36.145990  559515 logs.go:282] 0 containers: []
	W1210 08:42:36.145999  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:36.146005  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:36.146113  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:36.170982  559515 cri.go:89] found id: ""
	I1210 08:42:36.171046  559515 logs.go:282] 0 containers: []
	W1210 08:42:36.171055  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:36.171061  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:36.171126  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:36.200715  559515 cri.go:89] found id: ""
	I1210 08:42:36.200740  559515 logs.go:282] 0 containers: []
	W1210 08:42:36.200748  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:36.200757  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:36.200787  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:36.268423  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:36.268460  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:36.284604  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:36.284687  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:36.352514  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:36.352534  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:36.352547  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:36.382935  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:36.382967  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:38.911267  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:38.923058  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:38.923135  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:38.953644  559515 cri.go:89] found id: ""
	I1210 08:42:38.953671  559515 logs.go:282] 0 containers: []
	W1210 08:42:38.953680  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:38.953696  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:38.953762  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:38.980825  559515 cri.go:89] found id: ""
	I1210 08:42:38.980851  559515 logs.go:282] 0 containers: []
	W1210 08:42:38.980860  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:38.980866  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:38.980929  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:39.011808  559515 cri.go:89] found id: ""
	I1210 08:42:39.011836  559515 logs.go:282] 0 containers: []
	W1210 08:42:39.011846  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:39.011852  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:39.011919  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:39.038907  559515 cri.go:89] found id: ""
	I1210 08:42:39.038930  559515 logs.go:282] 0 containers: []
	W1210 08:42:39.038939  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:39.038945  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:39.039040  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:39.066470  559515 cri.go:89] found id: ""
	I1210 08:42:39.066540  559515 logs.go:282] 0 containers: []
	W1210 08:42:39.066566  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:39.066578  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:39.066653  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:39.092672  559515 cri.go:89] found id: ""
	I1210 08:42:39.092698  559515 logs.go:282] 0 containers: []
	W1210 08:42:39.092707  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:39.092714  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:39.092772  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:39.118295  559515 cri.go:89] found id: ""
	I1210 08:42:39.118322  559515 logs.go:282] 0 containers: []
	W1210 08:42:39.118331  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:39.118337  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:39.118397  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:39.144960  559515 cri.go:89] found id: ""
	I1210 08:42:39.144984  559515 logs.go:282] 0 containers: []
	W1210 08:42:39.144993  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:39.145003  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:39.145037  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:39.212786  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:39.212820  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:39.228677  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:39.228706  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:39.294733  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:39.294755  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:39.294770  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:39.325392  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:39.325427  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:41.856508  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:41.867654  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:41.867731  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:41.895576  559515 cri.go:89] found id: ""
	I1210 08:42:41.895602  559515 logs.go:282] 0 containers: []
	W1210 08:42:41.895611  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:41.895617  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:41.895678  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:41.940523  559515 cri.go:89] found id: ""
	I1210 08:42:41.940547  559515 logs.go:282] 0 containers: []
	W1210 08:42:41.940556  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:41.940562  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:41.940629  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:41.970843  559515 cri.go:89] found id: ""
	I1210 08:42:41.970865  559515 logs.go:282] 0 containers: []
	W1210 08:42:41.970874  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:41.970879  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:41.970939  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:42.001316  559515 cri.go:89] found id: ""
	I1210 08:42:42.001345  559515 logs.go:282] 0 containers: []
	W1210 08:42:42.001354  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:42.001361  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:42.001432  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:42.035665  559515 cri.go:89] found id: ""
	I1210 08:42:42.035689  559515 logs.go:282] 0 containers: []
	W1210 08:42:42.035698  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:42.035704  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:42.035803  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:42.062443  559515 cri.go:89] found id: ""
	I1210 08:42:42.062481  559515 logs.go:282] 0 containers: []
	W1210 08:42:42.062495  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:42.062502  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:42.062573  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:42.104793  559515 cri.go:89] found id: ""
	I1210 08:42:42.104827  559515 logs.go:282] 0 containers: []
	W1210 08:42:42.104838  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:42.104845  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:42.104919  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:42.197548  559515 cri.go:89] found id: ""
	I1210 08:42:42.197582  559515 logs.go:282] 0 containers: []
	W1210 08:42:42.197598  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:42.197609  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:42.197660  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:42.273725  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:42.273766  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:42.291439  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:42.291472  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:42.362618  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:42.362641  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:42.362653  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:42.393975  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:42.394013  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:44.924233  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:44.936006  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:44.936085  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:44.966129  559515 cri.go:89] found id: ""
	I1210 08:42:44.966156  559515 logs.go:282] 0 containers: []
	W1210 08:42:44.966165  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:44.966171  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:44.966232  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:44.993595  559515 cri.go:89] found id: ""
	I1210 08:42:44.993621  559515 logs.go:282] 0 containers: []
	W1210 08:42:44.993630  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:44.993635  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:44.993710  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:45.032359  559515 cri.go:89] found id: ""
	I1210 08:42:45.032383  559515 logs.go:282] 0 containers: []
	W1210 08:42:45.032392  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:45.032398  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:45.032467  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:45.082095  559515 cri.go:89] found id: ""
	I1210 08:42:45.082135  559515 logs.go:282] 0 containers: []
	W1210 08:42:45.082146  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:45.082153  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:45.082238  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:45.126127  559515 cri.go:89] found id: ""
	I1210 08:42:45.126153  559515 logs.go:282] 0 containers: []
	W1210 08:42:45.126163  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:45.126169  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:45.126238  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:45.168039  559515 cri.go:89] found id: ""
	I1210 08:42:45.168074  559515 logs.go:282] 0 containers: []
	W1210 08:42:45.168086  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:45.168097  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:45.168199  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:45.211104  559515 cri.go:89] found id: ""
	I1210 08:42:45.211130  559515 logs.go:282] 0 containers: []
	W1210 08:42:45.211139  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:45.211146  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:45.211221  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:45.247977  559515 cri.go:89] found id: ""
	I1210 08:42:45.248014  559515 logs.go:282] 0 containers: []
	W1210 08:42:45.248024  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:45.248034  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:45.248048  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:45.321015  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:45.321056  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:45.337400  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:45.337439  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:45.411542  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:45.411607  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:45.411633  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:45.442456  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:45.442490  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:47.972245  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:47.982303  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:47.982375  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:48.012838  559515 cri.go:89] found id: ""
	I1210 08:42:48.012870  559515 logs.go:282] 0 containers: []
	W1210 08:42:48.012879  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:48.012886  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:48.012960  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:48.045499  559515 cri.go:89] found id: ""
	I1210 08:42:48.045525  559515 logs.go:282] 0 containers: []
	W1210 08:42:48.045534  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:48.045540  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:48.045599  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:48.074775  559515 cri.go:89] found id: ""
	I1210 08:42:48.074799  559515 logs.go:282] 0 containers: []
	W1210 08:42:48.074809  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:48.074815  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:48.074886  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:48.103327  559515 cri.go:89] found id: ""
	I1210 08:42:48.103354  559515 logs.go:282] 0 containers: []
	W1210 08:42:48.103364  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:48.103371  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:48.103430  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:48.129685  559515 cri.go:89] found id: ""
	I1210 08:42:48.129722  559515 logs.go:282] 0 containers: []
	W1210 08:42:48.129732  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:48.129738  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:48.129824  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:48.158491  559515 cri.go:89] found id: ""
	I1210 08:42:48.158525  559515 logs.go:282] 0 containers: []
	W1210 08:42:48.158534  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:48.158541  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:48.158614  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:48.185429  559515 cri.go:89] found id: ""
	I1210 08:42:48.185455  559515 logs.go:282] 0 containers: []
	W1210 08:42:48.185464  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:48.185470  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:48.185544  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:48.214478  559515 cri.go:89] found id: ""
	I1210 08:42:48.214511  559515 logs.go:282] 0 containers: []
	W1210 08:42:48.214521  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:48.214530  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:48.214541  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:48.250476  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:48.250569  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:48.324812  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:48.324861  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:48.341709  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:48.341752  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:48.410494  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:48.410516  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:48.410529  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:50.941735  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:50.962632  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:50.962750  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:50.994219  559515 cri.go:89] found id: ""
	I1210 08:42:50.994246  559515 logs.go:282] 0 containers: []
	W1210 08:42:50.994256  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:50.994262  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:50.994341  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:51.024330  559515 cri.go:89] found id: ""
	I1210 08:42:51.024353  559515 logs.go:282] 0 containers: []
	W1210 08:42:51.024363  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:51.024369  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:51.024461  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:51.051629  559515 cri.go:89] found id: ""
	I1210 08:42:51.051654  559515 logs.go:282] 0 containers: []
	W1210 08:42:51.051663  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:51.051669  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:51.051736  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:51.078466  559515 cri.go:89] found id: ""
	I1210 08:42:51.078492  559515 logs.go:282] 0 containers: []
	W1210 08:42:51.078501  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:51.078507  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:51.078572  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:51.108585  559515 cri.go:89] found id: ""
	I1210 08:42:51.108612  559515 logs.go:282] 0 containers: []
	W1210 08:42:51.108621  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:51.108627  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:51.108700  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:51.134135  559515 cri.go:89] found id: ""
	I1210 08:42:51.134159  559515 logs.go:282] 0 containers: []
	W1210 08:42:51.134168  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:51.134175  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:51.134239  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:51.161944  559515 cri.go:89] found id: ""
	I1210 08:42:51.161978  559515 logs.go:282] 0 containers: []
	W1210 08:42:51.161987  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:51.161994  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:51.162082  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:51.190629  559515 cri.go:89] found id: ""
	I1210 08:42:51.190664  559515 logs.go:282] 0 containers: []
	W1210 08:42:51.190673  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:51.190682  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:51.190693  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:51.257785  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:51.257865  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:51.274445  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:51.274476  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:51.343823  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:51.343842  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:51.343855  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:51.375265  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:51.375302  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:53.903854  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:53.916083  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:53.916165  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:53.945036  559515 cri.go:89] found id: ""
	I1210 08:42:53.945058  559515 logs.go:282] 0 containers: []
	W1210 08:42:53.945066  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:53.945073  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:53.945138  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:53.977220  559515 cri.go:89] found id: ""
	I1210 08:42:53.977243  559515 logs.go:282] 0 containers: []
	W1210 08:42:53.977252  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:53.977258  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:53.977320  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:54.006826  559515 cri.go:89] found id: ""
	I1210 08:42:54.006853  559515 logs.go:282] 0 containers: []
	W1210 08:42:54.006863  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:54.006870  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:54.006958  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:54.036061  559515 cri.go:89] found id: ""
	I1210 08:42:54.036087  559515 logs.go:282] 0 containers: []
	W1210 08:42:54.036096  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:54.036103  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:54.036165  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:54.063805  559515 cri.go:89] found id: ""
	I1210 08:42:54.063845  559515 logs.go:282] 0 containers: []
	W1210 08:42:54.063855  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:54.063862  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:54.063939  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:54.094780  559515 cri.go:89] found id: ""
	I1210 08:42:54.094849  559515 logs.go:282] 0 containers: []
	W1210 08:42:54.094871  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:54.094889  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:54.094981  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:54.126187  559515 cri.go:89] found id: ""
	I1210 08:42:54.126266  559515 logs.go:282] 0 containers: []
	W1210 08:42:54.126291  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:54.126310  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:54.126402  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:54.155915  559515 cri.go:89] found id: ""
	I1210 08:42:54.155983  559515 logs.go:282] 0 containers: []
	W1210 08:42:54.156006  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:54.156027  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:54.156067  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:42:54.185276  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:54.185306  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:54.261349  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:54.261396  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:54.278720  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:54.278755  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:54.348295  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:54.348316  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:54.348329  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
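	Every "describe nodes" attempt in these cycles fails the same way: kubectl dials the apiserver at localhost:8443 and gets connection refused, which means nothing is listening on that port at all — the failure happens below TLS and authentication. A short Go sketch of that distinction, a hypothetical probe for running on the node rather than anything from the test suite:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        // kubectl's "connection refused" corresponds to a failed TCP dial:
	        // the port is closed, so the request never reaches TLS or auth.
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            fmt.Println("apiserver not listening:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("port 8443 is open; a failure here would be higher in the stack")
	    }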
	I1210 08:42:56.881601  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:42:56.894928  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:42:56.895036  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:42:56.958004  559515 cri.go:89] found id: ""
	I1210 08:42:56.958033  559515 logs.go:282] 0 containers: []
	W1210 08:42:56.958043  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:42:56.958054  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:42:56.958149  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:42:57.048306  559515 cri.go:89] found id: ""
	I1210 08:42:57.048343  559515 logs.go:282] 0 containers: []
	W1210 08:42:57.048353  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:42:57.048366  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:42:57.048441  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:42:57.092338  559515 cri.go:89] found id: ""
	I1210 08:42:57.092372  559515 logs.go:282] 0 containers: []
	W1210 08:42:57.092383  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:42:57.092390  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:42:57.092467  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:42:57.142143  559515 cri.go:89] found id: ""
	I1210 08:42:57.142172  559515 logs.go:282] 0 containers: []
	W1210 08:42:57.142183  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:42:57.142190  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:42:57.142256  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:42:57.185922  559515 cri.go:89] found id: ""
	I1210 08:42:57.186000  559515 logs.go:282] 0 containers: []
	W1210 08:42:57.186023  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:42:57.186044  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:42:57.186135  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:42:57.218445  559515 cri.go:89] found id: ""
	I1210 08:42:57.218523  559515 logs.go:282] 0 containers: []
	W1210 08:42:57.218553  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:42:57.218573  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:42:57.218682  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:42:57.257295  559515 cri.go:89] found id: ""
	I1210 08:42:57.257380  559515 logs.go:282] 0 containers: []
	W1210 08:42:57.257405  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:42:57.257433  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:42:57.257534  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:42:57.301438  559515 cri.go:89] found id: ""
	I1210 08:42:57.301514  559515 logs.go:282] 0 containers: []
	W1210 08:42:57.301543  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:42:57.301567  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:42:57.301652  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:42:57.398241  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:42:57.398350  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:42:57.416157  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:42:57.416245  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:42:57.520462  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:42:57.520537  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:42:57.520567  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:42:57.556050  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:42:57.556083  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:00.100646  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:00.136498  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:00.136618  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:00.242475  559515 cri.go:89] found id: ""
	I1210 08:43:00.242501  559515 logs.go:282] 0 containers: []
	W1210 08:43:00.242511  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:00.242518  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:00.242590  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:00.335827  559515 cri.go:89] found id: ""
	I1210 08:43:00.335861  559515 logs.go:282] 0 containers: []
	W1210 08:43:00.335872  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:00.335879  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:00.336009  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:00.391402  559515 cri.go:89] found id: ""
	I1210 08:43:00.391488  559515 logs.go:282] 0 containers: []
	W1210 08:43:00.391517  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:00.391558  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:00.391702  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:00.428826  559515 cri.go:89] found id: ""
	I1210 08:43:00.428862  559515 logs.go:282] 0 containers: []
	W1210 08:43:00.428873  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:00.428882  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:00.428951  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:00.462893  559515 cri.go:89] found id: ""
	I1210 08:43:00.462929  559515 logs.go:282] 0 containers: []
	W1210 08:43:00.462939  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:00.462952  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:00.463048  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:00.493753  559515 cri.go:89] found id: ""
	I1210 08:43:00.493856  559515 logs.go:282] 0 containers: []
	W1210 08:43:00.493881  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:00.493900  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:00.494055  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:00.526699  559515 cri.go:89] found id: ""
	I1210 08:43:00.526770  559515 logs.go:282] 0 containers: []
	W1210 08:43:00.526792  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:00.526809  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:00.526903  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:00.553900  559515 cri.go:89] found id: ""
	I1210 08:43:00.553972  559515 logs.go:282] 0 containers: []
	W1210 08:43:00.553995  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:00.554015  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:00.554053  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:00.584222  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:00.584295  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:00.651884  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:00.651923  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:00.670910  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:00.671000  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:00.748606  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:00.748679  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:00.748707  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:03.280566  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:03.290870  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:03.290939  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:03.319380  559515 cri.go:89] found id: ""
	I1210 08:43:03.319405  559515 logs.go:282] 0 containers: []
	W1210 08:43:03.319414  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:03.319420  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:03.319478  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:03.346234  559515 cri.go:89] found id: ""
	I1210 08:43:03.346261  559515 logs.go:282] 0 containers: []
	W1210 08:43:03.346270  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:03.346276  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:03.346336  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:03.371813  559515 cri.go:89] found id: ""
	I1210 08:43:03.371838  559515 logs.go:282] 0 containers: []
	W1210 08:43:03.371847  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:03.371853  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:03.371912  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:03.397959  559515 cri.go:89] found id: ""
	I1210 08:43:03.397985  559515 logs.go:282] 0 containers: []
	W1210 08:43:03.397994  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:03.398000  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:03.398063  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:03.428596  559515 cri.go:89] found id: ""
	I1210 08:43:03.428622  559515 logs.go:282] 0 containers: []
	W1210 08:43:03.428631  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:03.428637  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:03.428698  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:03.454271  559515 cri.go:89] found id: ""
	I1210 08:43:03.454297  559515 logs.go:282] 0 containers: []
	W1210 08:43:03.454306  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:03.454313  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:03.454374  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:03.479539  559515 cri.go:89] found id: ""
	I1210 08:43:03.479564  559515 logs.go:282] 0 containers: []
	W1210 08:43:03.479572  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:03.479579  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:03.479635  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:03.505010  559515 cri.go:89] found id: ""
	I1210 08:43:03.505037  559515 logs.go:282] 0 containers: []
	W1210 08:43:03.505046  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:03.505054  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:03.505073  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:03.573836  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:03.573872  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:03.590383  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:03.590411  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:03.662624  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:03.662644  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:03.662657  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:03.716136  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:03.716244  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:06.249264  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:06.259908  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:06.259981  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:06.287144  559515 cri.go:89] found id: ""
	I1210 08:43:06.287170  559515 logs.go:282] 0 containers: []
	W1210 08:43:06.287180  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:06.287187  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:06.287251  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:06.312722  559515 cri.go:89] found id: ""
	I1210 08:43:06.312747  559515 logs.go:282] 0 containers: []
	W1210 08:43:06.312756  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:06.312762  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:06.312823  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:06.338210  559515 cri.go:89] found id: ""
	I1210 08:43:06.338235  559515 logs.go:282] 0 containers: []
	W1210 08:43:06.338243  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:06.338249  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:06.338309  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:06.366252  559515 cri.go:89] found id: ""
	I1210 08:43:06.366278  559515 logs.go:282] 0 containers: []
	W1210 08:43:06.366288  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:06.366294  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:06.366360  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:06.392047  559515 cri.go:89] found id: ""
	I1210 08:43:06.392072  559515 logs.go:282] 0 containers: []
	W1210 08:43:06.392081  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:06.392088  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:06.392172  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:06.418531  559515 cri.go:89] found id: ""
	I1210 08:43:06.418554  559515 logs.go:282] 0 containers: []
	W1210 08:43:06.418562  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:06.418569  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:06.418658  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:06.448959  559515 cri.go:89] found id: ""
	I1210 08:43:06.448984  559515 logs.go:282] 0 containers: []
	W1210 08:43:06.448993  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:06.448999  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:06.449064  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:06.480767  559515 cri.go:89] found id: ""
	I1210 08:43:06.480795  559515 logs.go:282] 0 containers: []
	W1210 08:43:06.480804  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:06.480813  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:06.480858  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:06.552052  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:06.552092  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:06.572163  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:06.572201  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:06.642186  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:06.642210  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:06.642227  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:06.673417  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:06.673451  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:09.219548  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:09.229812  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:09.229891  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:09.255273  559515 cri.go:89] found id: ""
	I1210 08:43:09.255298  559515 logs.go:282] 0 containers: []
	W1210 08:43:09.255306  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:09.255313  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:09.255373  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:09.282090  559515 cri.go:89] found id: ""
	I1210 08:43:09.282119  559515 logs.go:282] 0 containers: []
	W1210 08:43:09.282129  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:09.282135  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:09.282194  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:09.308037  559515 cri.go:89] found id: ""
	I1210 08:43:09.308067  559515 logs.go:282] 0 containers: []
	W1210 08:43:09.308078  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:09.308085  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:09.308147  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:09.333332  559515 cri.go:89] found id: ""
	I1210 08:43:09.333359  559515 logs.go:282] 0 containers: []
	W1210 08:43:09.333368  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:09.333375  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:09.333434  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:09.358903  559515 cri.go:89] found id: ""
	I1210 08:43:09.358928  559515 logs.go:282] 0 containers: []
	W1210 08:43:09.358936  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:09.358942  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:09.359004  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:09.385762  559515 cri.go:89] found id: ""
	I1210 08:43:09.385796  559515 logs.go:282] 0 containers: []
	W1210 08:43:09.385805  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:09.385811  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:09.385872  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:09.415831  559515 cri.go:89] found id: ""
	I1210 08:43:09.415854  559515 logs.go:282] 0 containers: []
	W1210 08:43:09.415862  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:09.415868  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:09.415929  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:09.445287  559515 cri.go:89] found id: ""
	I1210 08:43:09.445312  559515 logs.go:282] 0 containers: []
	W1210 08:43:09.445322  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:09.445330  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:09.445341  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:09.475261  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:09.475291  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:09.544144  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:09.544182  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:09.560812  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:09.560840  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:09.629127  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:09.629150  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:09.629164  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:12.162447  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:12.172604  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:12.172676  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:12.202212  559515 cri.go:89] found id: ""
	I1210 08:43:12.202236  559515 logs.go:282] 0 containers: []
	W1210 08:43:12.202244  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:12.202250  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:12.202315  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:12.228447  559515 cri.go:89] found id: ""
	I1210 08:43:12.228471  559515 logs.go:282] 0 containers: []
	W1210 08:43:12.228480  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:12.228486  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:12.228549  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:12.258248  559515 cri.go:89] found id: ""
	I1210 08:43:12.258273  559515 logs.go:282] 0 containers: []
	W1210 08:43:12.258283  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:12.258289  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:12.258348  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:12.285120  559515 cri.go:89] found id: ""
	I1210 08:43:12.285156  559515 logs.go:282] 0 containers: []
	W1210 08:43:12.285166  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:12.285172  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:12.285233  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:12.316974  559515 cri.go:89] found id: ""
	I1210 08:43:12.317000  559515 logs.go:282] 0 containers: []
	W1210 08:43:12.317009  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:12.317015  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:12.317076  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:12.343666  559515 cri.go:89] found id: ""
	I1210 08:43:12.343690  559515 logs.go:282] 0 containers: []
	W1210 08:43:12.343698  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:12.343705  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:12.343774  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:12.369916  559515 cri.go:89] found id: ""
	I1210 08:43:12.369939  559515 logs.go:282] 0 containers: []
	W1210 08:43:12.369947  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:12.369953  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:12.370017  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:12.396162  559515 cri.go:89] found id: ""
	I1210 08:43:12.396185  559515 logs.go:282] 0 containers: []
	W1210 08:43:12.396194  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:12.396202  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:12.396214  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:12.463638  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:12.463675  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:12.480212  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:12.480242  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:12.544420  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:12.544441  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:12.544454  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:12.575393  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:12.575427  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:15.106662  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:15.117118  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:15.117191  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:15.143971  559515 cri.go:89] found id: ""
	I1210 08:43:15.143997  559515 logs.go:282] 0 containers: []
	W1210 08:43:15.144007  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:15.144013  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:15.144077  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:15.172924  559515 cri.go:89] found id: ""
	I1210 08:43:15.172949  559515 logs.go:282] 0 containers: []
	W1210 08:43:15.172958  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:15.172964  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:15.173032  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:15.200556  559515 cri.go:89] found id: ""
	I1210 08:43:15.200581  559515 logs.go:282] 0 containers: []
	W1210 08:43:15.200590  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:15.200595  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:15.200654  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:15.225230  559515 cri.go:89] found id: ""
	I1210 08:43:15.225254  559515 logs.go:282] 0 containers: []
	W1210 08:43:15.225263  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:15.225269  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:15.225328  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:15.254452  559515 cri.go:89] found id: ""
	I1210 08:43:15.254476  559515 logs.go:282] 0 containers: []
	W1210 08:43:15.254486  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:15.254492  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:15.254550  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:15.279357  559515 cri.go:89] found id: ""
	I1210 08:43:15.279382  559515 logs.go:282] 0 containers: []
	W1210 08:43:15.279391  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:15.279397  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:15.279455  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:15.303614  559515 cri.go:89] found id: ""
	I1210 08:43:15.303647  559515 logs.go:282] 0 containers: []
	W1210 08:43:15.303660  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:15.303667  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:15.303731  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:15.328412  559515 cri.go:89] found id: ""
	I1210 08:43:15.328434  559515 logs.go:282] 0 containers: []
	W1210 08:43:15.328443  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:15.328451  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:15.328466  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:15.395761  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:15.395801  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:15.411407  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:15.411437  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:15.480415  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:15.480433  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:15.480447  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:15.511292  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:15.511330  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:18.040542  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:18.052030  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:18.052102  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:18.079603  559515 cri.go:89] found id: ""
	I1210 08:43:18.079628  559515 logs.go:282] 0 containers: []
	W1210 08:43:18.079637  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:18.079643  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:18.079709  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:18.107374  559515 cri.go:89] found id: ""
	I1210 08:43:18.107402  559515 logs.go:282] 0 containers: []
	W1210 08:43:18.107412  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:18.107418  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:18.107482  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:18.141215  559515 cri.go:89] found id: ""
	I1210 08:43:18.141241  559515 logs.go:282] 0 containers: []
	W1210 08:43:18.141250  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:18.141255  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:18.141314  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:18.168452  559515 cri.go:89] found id: ""
	I1210 08:43:18.168476  559515 logs.go:282] 0 containers: []
	W1210 08:43:18.168485  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:18.168491  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:18.168553  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:18.194590  559515 cri.go:89] found id: ""
	I1210 08:43:18.194615  559515 logs.go:282] 0 containers: []
	W1210 08:43:18.194625  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:18.194631  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:18.194690  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:18.222361  559515 cri.go:89] found id: ""
	I1210 08:43:18.222386  559515 logs.go:282] 0 containers: []
	W1210 08:43:18.222395  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:18.222402  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:18.222462  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:18.247951  559515 cri.go:89] found id: ""
	I1210 08:43:18.247974  559515 logs.go:282] 0 containers: []
	W1210 08:43:18.247984  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:18.247990  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:18.248050  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:18.278679  559515 cri.go:89] found id: ""
	I1210 08:43:18.278708  559515 logs.go:282] 0 containers: []
	W1210 08:43:18.278718  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:18.278726  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:18.278738  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:18.295660  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:18.295689  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:18.363722  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:18.363746  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:18.363759  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:18.395183  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:18.395222  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:18.430264  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:18.430292  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:21.002872  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:21.015753  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:21.015840  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:21.042822  559515 cri.go:89] found id: ""
	I1210 08:43:21.042844  559515 logs.go:282] 0 containers: []
	W1210 08:43:21.042853  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:21.042859  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:21.042920  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:21.068998  559515 cri.go:89] found id: ""
	I1210 08:43:21.069024  559515 logs.go:282] 0 containers: []
	W1210 08:43:21.069032  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:21.069039  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:21.069099  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:21.095273  559515 cri.go:89] found id: ""
	I1210 08:43:21.095299  559515 logs.go:282] 0 containers: []
	W1210 08:43:21.095308  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:21.095314  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:21.095374  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:21.122207  559515 cri.go:89] found id: ""
	I1210 08:43:21.122233  559515 logs.go:282] 0 containers: []
	W1210 08:43:21.122242  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:21.122249  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:21.122314  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:21.147661  559515 cri.go:89] found id: ""
	I1210 08:43:21.147688  559515 logs.go:282] 0 containers: []
	W1210 08:43:21.147696  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:21.147703  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:21.147764  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:21.174905  559515 cri.go:89] found id: ""
	I1210 08:43:21.174931  559515 logs.go:282] 0 containers: []
	W1210 08:43:21.174940  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:21.174946  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:21.175004  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:21.200993  559515 cri.go:89] found id: ""
	I1210 08:43:21.201020  559515 logs.go:282] 0 containers: []
	W1210 08:43:21.201029  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:21.201035  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:21.201095  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:21.225449  559515 cri.go:89] found id: ""
	I1210 08:43:21.225475  559515 logs.go:282] 0 containers: []
	W1210 08:43:21.225484  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:21.225493  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:21.225508  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:21.240970  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:21.240998  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:21.302696  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:21.302722  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:21.302736  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:21.333295  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:21.333326  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:21.368407  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:21.368436  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:23.938220  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:23.952297  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:23.952369  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:24.019927  559515 cri.go:89] found id: ""
	I1210 08:43:24.019958  559515 logs.go:282] 0 containers: []
	W1210 08:43:24.019967  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:24.019974  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:24.020036  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:24.058550  559515 cri.go:89] found id: ""
	I1210 08:43:24.058580  559515 logs.go:282] 0 containers: []
	W1210 08:43:24.058589  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:24.058595  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:24.058660  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:24.088100  559515 cri.go:89] found id: ""
	I1210 08:43:24.088123  559515 logs.go:282] 0 containers: []
	W1210 08:43:24.088132  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:24.088138  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:24.088196  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:24.125989  559515 cri.go:89] found id: ""
	I1210 08:43:24.126013  559515 logs.go:282] 0 containers: []
	W1210 08:43:24.126021  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:24.126027  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:24.126086  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:24.153958  559515 cri.go:89] found id: ""
	I1210 08:43:24.153980  559515 logs.go:282] 0 containers: []
	W1210 08:43:24.153988  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:24.153994  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:24.154053  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:24.197358  559515 cri.go:89] found id: ""
	I1210 08:43:24.197379  559515 logs.go:282] 0 containers: []
	W1210 08:43:24.197387  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:24.197393  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:24.197449  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:24.223648  559515 cri.go:89] found id: ""
	I1210 08:43:24.223669  559515 logs.go:282] 0 containers: []
	W1210 08:43:24.223678  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:24.223684  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:24.223748  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:24.249356  559515 cri.go:89] found id: ""
	I1210 08:43:24.249377  559515 logs.go:282] 0 containers: []
	W1210 08:43:24.249386  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:24.249395  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:24.249407  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:24.320629  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:24.320664  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:24.336518  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:24.336549  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:24.404054  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:24.404074  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:24.404087  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:24.435597  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:24.435637  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
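	The "container status" step in each cycle runs a two-level fallback in a single bash line: resolve crictl with `which` (falling back to the bare name so the eventual error message stays readable), and if the whole crictl invocation fails, try docker instead. Reproduced as a hedged Go sketch — same command string as the log, assuming /bin/bash exists on the node:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // Exactly the fallback chain the log runs: prefer crictl, else docker.
	        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	        if err != nil {
	            fmt.Println("both crictl and docker failed:", err)
	        }
	        fmt.Print(string(out))
	    }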
	I1210 08:43:26.967710  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:26.995609  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:26.995691  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:27.033973  559515 cri.go:89] found id: ""
	I1210 08:43:27.034003  559515 logs.go:282] 0 containers: []
	W1210 08:43:27.034013  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:27.034019  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:27.034081  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:27.064200  559515 cri.go:89] found id: ""
	I1210 08:43:27.064228  559515 logs.go:282] 0 containers: []
	W1210 08:43:27.064237  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:27.064243  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:27.064305  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:27.093682  559515 cri.go:89] found id: ""
	I1210 08:43:27.093710  559515 logs.go:282] 0 containers: []
	W1210 08:43:27.093719  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:27.093725  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:27.093785  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:27.132447  559515 cri.go:89] found id: ""
	I1210 08:43:27.132476  559515 logs.go:282] 0 containers: []
	W1210 08:43:27.132485  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:27.132491  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:27.132559  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:27.174547  559515 cri.go:89] found id: ""
	I1210 08:43:27.174574  559515 logs.go:282] 0 containers: []
	W1210 08:43:27.174583  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:27.174589  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:27.174648  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:27.209200  559515 cri.go:89] found id: ""
	I1210 08:43:27.209228  559515 logs.go:282] 0 containers: []
	W1210 08:43:27.209237  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:27.209248  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:27.209327  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:27.241387  559515 cri.go:89] found id: ""
	I1210 08:43:27.241408  559515 logs.go:282] 0 containers: []
	W1210 08:43:27.241417  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:27.241423  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:27.241481  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:27.276560  559515 cri.go:89] found id: ""
	I1210 08:43:27.276583  559515 logs.go:282] 0 containers: []
	W1210 08:43:27.276591  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:27.276601  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:27.276613  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:27.315610  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:27.315691  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:27.350794  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:27.350862  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:27.426454  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:27.426524  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:27.446309  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:27.446389  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:27.539724  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:30.040619  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:30.053042  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:30.053138  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:30.082501  559515 cri.go:89] found id: ""
	I1210 08:43:30.082526  559515 logs.go:282] 0 containers: []
	W1210 08:43:30.082535  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:30.082541  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:30.082610  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:30.110766  559515 cri.go:89] found id: ""
	I1210 08:43:30.110791  559515 logs.go:282] 0 containers: []
	W1210 08:43:30.110800  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:30.110807  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:30.110871  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:30.138135  559515 cri.go:89] found id: ""
	I1210 08:43:30.138215  559515 logs.go:282] 0 containers: []
	W1210 08:43:30.138239  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:30.138257  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:30.138350  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:30.168717  559515 cri.go:89] found id: ""
	I1210 08:43:30.168740  559515 logs.go:282] 0 containers: []
	W1210 08:43:30.168749  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:30.168755  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:30.168813  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:30.197750  559515 cri.go:89] found id: ""
	I1210 08:43:30.197773  559515 logs.go:282] 0 containers: []
	W1210 08:43:30.197782  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:30.197788  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:30.197852  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:30.234608  559515 cri.go:89] found id: ""
	I1210 08:43:30.234630  559515 logs.go:282] 0 containers: []
	W1210 08:43:30.234638  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:30.234645  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:30.234703  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:30.265001  559515 cri.go:89] found id: ""
	I1210 08:43:30.265080  559515 logs.go:282] 0 containers: []
	W1210 08:43:30.265103  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:30.265123  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:30.265219  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:30.296749  559515 cri.go:89] found id: ""
	I1210 08:43:30.296825  559515 logs.go:282] 0 containers: []
	W1210 08:43:30.296847  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:30.296869  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:30.296905  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:30.384103  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:30.384438  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:30.402779  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:30.402863  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:30.492598  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:30.492660  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:30.492687  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:30.531060  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:30.531088  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:33.074420  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:33.085441  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:33.085519  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:33.111894  559515 cri.go:89] found id: ""
	I1210 08:43:33.111919  559515 logs.go:282] 0 containers: []
	W1210 08:43:33.111928  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:33.111935  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:33.112000  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:33.138707  559515 cri.go:89] found id: ""
	I1210 08:43:33.138732  559515 logs.go:282] 0 containers: []
	W1210 08:43:33.138741  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:33.138754  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:33.138815  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:33.168616  559515 cri.go:89] found id: ""
	I1210 08:43:33.168642  559515 logs.go:282] 0 containers: []
	W1210 08:43:33.168651  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:33.168657  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:33.168730  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:33.196921  559515 cri.go:89] found id: ""
	I1210 08:43:33.196945  559515 logs.go:282] 0 containers: []
	W1210 08:43:33.196954  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:33.196960  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:33.197022  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:33.223312  559515 cri.go:89] found id: ""
	I1210 08:43:33.223339  559515 logs.go:282] 0 containers: []
	W1210 08:43:33.223348  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:33.223354  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:33.223414  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:33.250290  559515 cri.go:89] found id: ""
	I1210 08:43:33.250315  559515 logs.go:282] 0 containers: []
	W1210 08:43:33.250324  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:33.250330  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:33.250389  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:33.278569  559515 cri.go:89] found id: ""
	I1210 08:43:33.278594  559515 logs.go:282] 0 containers: []
	W1210 08:43:33.278603  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:33.278609  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:33.278669  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:33.306326  559515 cri.go:89] found id: ""
	I1210 08:43:33.306351  559515 logs.go:282] 0 containers: []
	W1210 08:43:33.306365  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:33.306374  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:33.306386  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:33.378229  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:33.378276  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:33.395324  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:33.395362  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:33.462064  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:33.462085  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:33.462098  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:33.492614  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:33.492651  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:36.026918  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:36.038938  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:36.039034  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:36.071496  559515 cri.go:89] found id: ""
	I1210 08:43:36.071522  559515 logs.go:282] 0 containers: []
	W1210 08:43:36.071531  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:36.071537  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:36.071598  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:36.097944  559515 cri.go:89] found id: ""
	I1210 08:43:36.097967  559515 logs.go:282] 0 containers: []
	W1210 08:43:36.097975  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:36.097981  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:36.098042  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:36.124157  559515 cri.go:89] found id: ""
	I1210 08:43:36.124187  559515 logs.go:282] 0 containers: []
	W1210 08:43:36.124197  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:36.124203  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:36.124281  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:36.149779  559515 cri.go:89] found id: ""
	I1210 08:43:36.149805  559515 logs.go:282] 0 containers: []
	W1210 08:43:36.149814  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:36.149820  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:36.149883  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:36.178130  559515 cri.go:89] found id: ""
	I1210 08:43:36.178156  559515 logs.go:282] 0 containers: []
	W1210 08:43:36.178166  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:36.178171  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:36.178231  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:36.204991  559515 cri.go:89] found id: ""
	I1210 08:43:36.205023  559515 logs.go:282] 0 containers: []
	W1210 08:43:36.205034  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:36.205043  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:36.205119  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:36.230248  559515 cri.go:89] found id: ""
	I1210 08:43:36.230274  559515 logs.go:282] 0 containers: []
	W1210 08:43:36.230283  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:36.230289  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:36.230349  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:36.258251  559515 cri.go:89] found id: ""
	I1210 08:43:36.258278  559515 logs.go:282] 0 containers: []
	W1210 08:43:36.258293  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:36.258307  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:36.258319  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:36.329240  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:36.329260  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:36.329272  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:36.361064  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:36.361101  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:36.388845  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:36.388873  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:36.460825  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:36.460863  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:38.978172  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:38.989053  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:38.989123  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:39.023994  559515 cri.go:89] found id: ""
	I1210 08:43:39.024018  559515 logs.go:282] 0 containers: []
	W1210 08:43:39.024027  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:39.024033  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:39.024096  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:39.052321  559515 cri.go:89] found id: ""
	I1210 08:43:39.052345  559515 logs.go:282] 0 containers: []
	W1210 08:43:39.052353  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:39.052359  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:39.052421  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:39.082558  559515 cri.go:89] found id: ""
	I1210 08:43:39.082580  559515 logs.go:282] 0 containers: []
	W1210 08:43:39.082589  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:39.082595  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:39.082655  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:39.112033  559515 cri.go:89] found id: ""
	I1210 08:43:39.112059  559515 logs.go:282] 0 containers: []
	W1210 08:43:39.112068  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:39.112074  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:39.112134  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:39.137295  559515 cri.go:89] found id: ""
	I1210 08:43:39.137318  559515 logs.go:282] 0 containers: []
	W1210 08:43:39.137328  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:39.137334  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:39.137395  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:39.163998  559515 cri.go:89] found id: ""
	I1210 08:43:39.164021  559515 logs.go:282] 0 containers: []
	W1210 08:43:39.164030  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:39.164036  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:39.164095  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:39.190389  559515 cri.go:89] found id: ""
	I1210 08:43:39.190411  559515 logs.go:282] 0 containers: []
	W1210 08:43:39.190419  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:39.190425  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:39.190485  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:39.215856  559515 cri.go:89] found id: ""
	I1210 08:43:39.215878  559515 logs.go:282] 0 containers: []
	W1210 08:43:39.215887  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:39.215895  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:39.215906  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:39.283347  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:39.283386  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:39.299772  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:39.299804  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:39.366351  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:39.366416  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:39.366446  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:39.397719  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:39.397752  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:41.927152  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:41.939718  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:41.939788  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:41.973480  559515 cri.go:89] found id: ""
	I1210 08:43:41.973503  559515 logs.go:282] 0 containers: []
	W1210 08:43:41.973511  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:41.973517  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:41.973585  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:41.999426  559515 cri.go:89] found id: ""
	I1210 08:43:41.999452  559515 logs.go:282] 0 containers: []
	W1210 08:43:41.999461  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:41.999467  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:41.999527  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:42.030665  559515 cri.go:89] found id: ""
	I1210 08:43:42.030693  559515 logs.go:282] 0 containers: []
	W1210 08:43:42.030702  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:42.030710  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:42.030778  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:42.058563  559515 cri.go:89] found id: ""
	I1210 08:43:42.058591  559515 logs.go:282] 0 containers: []
	W1210 08:43:42.058599  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:42.058606  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:42.058671  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:42.089879  559515 cri.go:89] found id: ""
	I1210 08:43:42.089957  559515 logs.go:282] 0 containers: []
	W1210 08:43:42.089973  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:42.089980  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:42.090059  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:42.123218  559515 cri.go:89] found id: ""
	I1210 08:43:42.123253  559515 logs.go:282] 0 containers: []
	W1210 08:43:42.123269  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:42.123301  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:42.123399  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:42.158915  559515 cri.go:89] found id: ""
	I1210 08:43:42.158941  559515 logs.go:282] 0 containers: []
	W1210 08:43:42.158950  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:42.158961  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:42.159079  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:42.194137  559515 cri.go:89] found id: ""
	I1210 08:43:42.194164  559515 logs.go:282] 0 containers: []
	W1210 08:43:42.194174  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:42.194184  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:42.194218  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:42.266964  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:42.266987  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:42.267044  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:42.300600  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:42.300639  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:42.336988  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:42.337017  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:42.406461  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:42.406499  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:44.923497  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:44.936786  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:44.936860  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:45.005354  559515 cri.go:89] found id: ""
	I1210 08:43:45.005386  559515 logs.go:282] 0 containers: []
	W1210 08:43:45.005399  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:45.005406  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:45.005479  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:45.057809  559515 cri.go:89] found id: ""
	I1210 08:43:45.057843  559515 logs.go:282] 0 containers: []
	W1210 08:43:45.057853  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:45.057860  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:45.058000  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:45.100599  559515 cri.go:89] found id: ""
	I1210 08:43:45.100630  559515 logs.go:282] 0 containers: []
	W1210 08:43:45.100640  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:45.100647  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:45.100759  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:45.157599  559515 cri.go:89] found id: ""
	I1210 08:43:45.157634  559515 logs.go:282] 0 containers: []
	W1210 08:43:45.157646  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:45.157654  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:45.157787  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:45.214500  559515 cri.go:89] found id: ""
	I1210 08:43:45.214528  559515 logs.go:282] 0 containers: []
	W1210 08:43:45.214580  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:45.214659  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:45.214991  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:45.268119  559515 cri.go:89] found id: ""
	I1210 08:43:45.268148  559515 logs.go:282] 0 containers: []
	W1210 08:43:45.268157  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:45.268163  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:45.268269  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:45.303895  559515 cri.go:89] found id: ""
	I1210 08:43:45.303969  559515 logs.go:282] 0 containers: []
	W1210 08:43:45.304000  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:45.304019  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:45.304104  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:45.348533  559515 cri.go:89] found id: ""
	I1210 08:43:45.348607  559515 logs.go:282] 0 containers: []
	W1210 08:43:45.348640  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:45.348661  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:45.348704  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:45.382696  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:45.382726  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:45.420227  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:45.420252  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:45.513671  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:45.513754  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:45.541109  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:45.541136  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:45.642892  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:48.144497  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:48.155361  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:48.155436  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:48.185667  559515 cri.go:89] found id: ""
	I1210 08:43:48.185697  559515 logs.go:282] 0 containers: []
	W1210 08:43:48.185705  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:48.185712  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:48.185773  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:48.211621  559515 cri.go:89] found id: ""
	I1210 08:43:48.211646  559515 logs.go:282] 0 containers: []
	W1210 08:43:48.211655  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:48.211661  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:48.211719  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:48.236977  559515 cri.go:89] found id: ""
	I1210 08:43:48.237008  559515 logs.go:282] 0 containers: []
	W1210 08:43:48.237017  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:48.237024  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:48.237088  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:48.267349  559515 cri.go:89] found id: ""
	I1210 08:43:48.267374  559515 logs.go:282] 0 containers: []
	W1210 08:43:48.267382  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:48.267388  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:48.267448  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:48.294950  559515 cri.go:89] found id: ""
	I1210 08:43:48.294977  559515 logs.go:282] 0 containers: []
	W1210 08:43:48.294986  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:48.294992  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:48.295078  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:48.322702  559515 cri.go:89] found id: ""
	I1210 08:43:48.322726  559515 logs.go:282] 0 containers: []
	W1210 08:43:48.322735  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:48.322742  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:48.322805  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:48.350494  559515 cri.go:89] found id: ""
	I1210 08:43:48.350518  559515 logs.go:282] 0 containers: []
	W1210 08:43:48.350527  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:48.350532  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:48.350595  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:48.376370  559515 cri.go:89] found id: ""
	I1210 08:43:48.376394  559515 logs.go:282] 0 containers: []
	W1210 08:43:48.376402  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:48.376411  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:48.376422  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:48.444446  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:48.444483  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:48.460612  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:48.460642  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:48.526987  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:48.527007  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:48.527040  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:48.557759  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:48.557792  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:51.087795  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:51.098456  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:51.098531  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:51.124412  559515 cri.go:89] found id: ""
	I1210 08:43:51.124436  559515 logs.go:282] 0 containers: []
	W1210 08:43:51.124451  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:51.124460  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:51.124519  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:51.150864  559515 cri.go:89] found id: ""
	I1210 08:43:51.150890  559515 logs.go:282] 0 containers: []
	W1210 08:43:51.150899  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:51.150905  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:51.150968  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:51.176119  559515 cri.go:89] found id: ""
	I1210 08:43:51.176143  559515 logs.go:282] 0 containers: []
	W1210 08:43:51.176151  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:51.176165  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:51.176229  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:51.204661  559515 cri.go:89] found id: ""
	I1210 08:43:51.204688  559515 logs.go:282] 0 containers: []
	W1210 08:43:51.204697  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:51.204703  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:51.204767  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:51.230166  559515 cri.go:89] found id: ""
	I1210 08:43:51.230193  559515 logs.go:282] 0 containers: []
	W1210 08:43:51.230202  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:51.230207  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:51.230271  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:51.256003  559515 cri.go:89] found id: ""
	I1210 08:43:51.256028  559515 logs.go:282] 0 containers: []
	W1210 08:43:51.256037  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:51.256043  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:51.256134  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:51.281761  559515 cri.go:89] found id: ""
	I1210 08:43:51.281786  559515 logs.go:282] 0 containers: []
	W1210 08:43:51.281795  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:51.281801  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:51.281882  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:51.308211  559515 cri.go:89] found id: ""
	I1210 08:43:51.308235  559515 logs.go:282] 0 containers: []
	W1210 08:43:51.308244  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:51.308254  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:51.308294  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:51.376331  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:51.376369  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:51.393019  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:51.393051  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:51.459030  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:51.459052  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:51.459065  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:51.490761  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:51.490796  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:54.021365  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:54.032221  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:54.032298  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:54.059880  559515 cri.go:89] found id: ""
	I1210 08:43:54.059903  559515 logs.go:282] 0 containers: []
	W1210 08:43:54.059912  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:54.059918  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:54.059979  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:54.090294  559515 cri.go:89] found id: ""
	I1210 08:43:54.090320  559515 logs.go:282] 0 containers: []
	W1210 08:43:54.090329  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:54.090335  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:54.090399  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:54.117922  559515 cri.go:89] found id: ""
	I1210 08:43:54.117948  559515 logs.go:282] 0 containers: []
	W1210 08:43:54.117957  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:54.117963  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:54.118024  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:54.150693  559515 cri.go:89] found id: ""
	I1210 08:43:54.150718  559515 logs.go:282] 0 containers: []
	W1210 08:43:54.150727  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:54.150733  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:54.150790  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:54.175971  559515 cri.go:89] found id: ""
	I1210 08:43:54.175994  559515 logs.go:282] 0 containers: []
	W1210 08:43:54.176003  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:54.176009  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:54.176065  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:54.201699  559515 cri.go:89] found id: ""
	I1210 08:43:54.201725  559515 logs.go:282] 0 containers: []
	W1210 08:43:54.201734  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:54.201740  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:54.201799  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:54.226161  559515 cri.go:89] found id: ""
	I1210 08:43:54.226185  559515 logs.go:282] 0 containers: []
	W1210 08:43:54.226194  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:54.226199  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:54.226261  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:54.251990  559515 cri.go:89] found id: ""
	I1210 08:43:54.252030  559515 logs.go:282] 0 containers: []
	W1210 08:43:54.252039  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:54.252049  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:54.252066  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:54.320581  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:54.320616  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:54.336782  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:54.336810  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:54.403253  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:54.403271  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:54.403284  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:54.435430  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:54.435466  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:56.964321  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:56.974477  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:56.974548  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:57.009691  559515 cri.go:89] found id: ""
	I1210 08:43:57.009717  559515 logs.go:282] 0 containers: []
	W1210 08:43:57.009725  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:57.009731  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:57.009795  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:57.036317  559515 cri.go:89] found id: ""
	I1210 08:43:57.036343  559515 logs.go:282] 0 containers: []
	W1210 08:43:57.036351  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:57.036357  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:57.036417  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:43:57.063511  559515 cri.go:89] found id: ""
	I1210 08:43:57.063534  559515 logs.go:282] 0 containers: []
	W1210 08:43:57.063543  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:43:57.063549  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:43:57.063605  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:43:57.090025  559515 cri.go:89] found id: ""
	I1210 08:43:57.090051  559515 logs.go:282] 0 containers: []
	W1210 08:43:57.090060  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:43:57.090066  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:43:57.090131  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:43:57.117121  559515 cri.go:89] found id: ""
	I1210 08:43:57.117145  559515 logs.go:282] 0 containers: []
	W1210 08:43:57.117153  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:43:57.117159  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:43:57.117248  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:43:57.142257  559515 cri.go:89] found id: ""
	I1210 08:43:57.142283  559515 logs.go:282] 0 containers: []
	W1210 08:43:57.142293  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:43:57.142299  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:43:57.142386  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:43:57.168404  559515 cri.go:89] found id: ""
	I1210 08:43:57.168440  559515 logs.go:282] 0 containers: []
	W1210 08:43:57.168450  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:43:57.168456  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:43:57.168518  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:43:57.193100  559515 cri.go:89] found id: ""
	I1210 08:43:57.193124  559515 logs.go:282] 0 containers: []
	W1210 08:43:57.193133  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:43:57.193168  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:43:57.193186  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:43:57.265178  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:43:57.265214  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:43:57.281267  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:43:57.281297  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:43:57.349324  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:43:57.349346  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:43:57.349359  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:43:57.380661  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:43:57.380697  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:43:59.911133  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:43:59.922494  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:43:59.922564  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:43:59.957155  559515 cri.go:89] found id: ""
	I1210 08:43:59.957177  559515 logs.go:282] 0 containers: []
	W1210 08:43:59.957186  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:43:59.957192  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:43:59.957252  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:43:59.986107  559515 cri.go:89] found id: ""
	I1210 08:43:59.986135  559515 logs.go:282] 0 containers: []
	W1210 08:43:59.986144  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:43:59.986151  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:43:59.986211  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:00.084705  559515 cri.go:89] found id: ""
	I1210 08:44:00.084735  559515 logs.go:282] 0 containers: []
	W1210 08:44:00.084746  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:00.084752  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:00.084837  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:00.261249  559515 cri.go:89] found id: ""
	I1210 08:44:00.261278  559515 logs.go:282] 0 containers: []
	W1210 08:44:00.261288  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:00.261294  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:00.261369  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:00.342981  559515 cri.go:89] found id: ""
	I1210 08:44:00.343021  559515 logs.go:282] 0 containers: []
	W1210 08:44:00.343032  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:00.343040  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:00.343114  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:00.437687  559515 cri.go:89] found id: ""
	I1210 08:44:00.437772  559515 logs.go:282] 0 containers: []
	W1210 08:44:00.437800  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:00.437820  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:00.437950  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:00.491630  559515 cri.go:89] found id: ""
	I1210 08:44:00.491654  559515 logs.go:282] 0 containers: []
	W1210 08:44:00.491664  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:00.491671  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:00.491740  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:00.548152  559515 cri.go:89] found id: ""
	I1210 08:44:00.548179  559515 logs.go:282] 0 containers: []
	W1210 08:44:00.548191  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:00.548202  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:00.548215  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:00.625737  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:00.625761  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:00.625778  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:00.658686  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:00.658725  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:00.697155  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:00.697185  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:00.778765  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:00.778804  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:03.295555  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:03.305336  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:03.305408  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:03.330395  559515 cri.go:89] found id: ""
	I1210 08:44:03.330419  559515 logs.go:282] 0 containers: []
	W1210 08:44:03.330428  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:03.330435  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:03.330495  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:03.355277  559515 cri.go:89] found id: ""
	I1210 08:44:03.355302  559515 logs.go:282] 0 containers: []
	W1210 08:44:03.355311  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:03.355317  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:03.355376  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:03.380850  559515 cri.go:89] found id: ""
	I1210 08:44:03.380873  559515 logs.go:282] 0 containers: []
	W1210 08:44:03.380882  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:03.380888  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:03.380948  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:03.406325  559515 cri.go:89] found id: ""
	I1210 08:44:03.406348  559515 logs.go:282] 0 containers: []
	W1210 08:44:03.406356  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:03.406362  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:03.406418  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:03.432400  559515 cri.go:89] found id: ""
	I1210 08:44:03.432427  559515 logs.go:282] 0 containers: []
	W1210 08:44:03.432436  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:03.432442  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:03.432499  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:03.461844  559515 cri.go:89] found id: ""
	I1210 08:44:03.461869  559515 logs.go:282] 0 containers: []
	W1210 08:44:03.461886  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:03.461893  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:03.461957  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:03.489418  559515 cri.go:89] found id: ""
	I1210 08:44:03.489442  559515 logs.go:282] 0 containers: []
	W1210 08:44:03.489452  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:03.489458  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:03.489522  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:03.517651  559515 cri.go:89] found id: ""
	I1210 08:44:03.517676  559515 logs.go:282] 0 containers: []
	W1210 08:44:03.517685  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:03.517693  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:03.517707  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:03.585773  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:03.585812  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:03.601961  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:03.601989  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:03.678893  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:03.678914  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:03.678927  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:03.716897  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:03.716992  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
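
Every line in this dump uses the klog header format `Lmmdd hh:mm:ss.uuuuuu PID file:line] msg`, where the leading letter is the severity (I=info, W=warning, E=error, F=fatal). A small parser for that header, handy when slicing long reports like this one (the regex is an assumption tuned to these lines, not a klog API):

    // klogparse.go - splits the klog header used by every line above.
    package main

    import (
        "fmt"
        "regexp"
    )

    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
        line := `W1210 08:44:03.380882  559515 logs.go:284] No container was found matching "coredns"`
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
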
	I1210 08:44:06.261345  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:06.272106  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:06.272172  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:06.301618  559515 cri.go:89] found id: ""
	I1210 08:44:06.301639  559515 logs.go:282] 0 containers: []
	W1210 08:44:06.301647  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:06.301653  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:06.301713  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:06.337708  559515 cri.go:89] found id: ""
	I1210 08:44:06.337731  559515 logs.go:282] 0 containers: []
	W1210 08:44:06.337740  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:06.337746  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:06.337809  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:06.372812  559515 cri.go:89] found id: ""
	I1210 08:44:06.372838  559515 logs.go:282] 0 containers: []
	W1210 08:44:06.372847  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:06.372853  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:06.372918  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:06.428985  559515 cri.go:89] found id: ""
	I1210 08:44:06.429006  559515 logs.go:282] 0 containers: []
	W1210 08:44:06.429015  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:06.429021  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:06.429080  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:06.466719  559515 cri.go:89] found id: ""
	I1210 08:44:06.466746  559515 logs.go:282] 0 containers: []
	W1210 08:44:06.466757  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:06.466764  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:06.466824  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:06.505659  559515 cri.go:89] found id: ""
	I1210 08:44:06.505685  559515 logs.go:282] 0 containers: []
	W1210 08:44:06.505695  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:06.505708  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:06.505776  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:06.538353  559515 cri.go:89] found id: ""
	I1210 08:44:06.538380  559515 logs.go:282] 0 containers: []
	W1210 08:44:06.538389  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:06.538395  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:06.538453  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:06.576300  559515 cri.go:89] found id: ""
	I1210 08:44:06.576328  559515 logs.go:282] 0 containers: []
	W1210 08:44:06.576338  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:06.576346  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:06.576357  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:06.652481  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:06.652517  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:06.672650  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:06.672679  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:06.753540  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:06.753559  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:06.753572  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:06.786636  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:06.786674  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:09.323677  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:09.337481  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:09.337572  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:09.376362  559515 cri.go:89] found id: ""
	I1210 08:44:09.376389  559515 logs.go:282] 0 containers: []
	W1210 08:44:09.376398  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:09.376404  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:09.376461  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:09.410475  559515 cri.go:89] found id: ""
	I1210 08:44:09.410503  559515 logs.go:282] 0 containers: []
	W1210 08:44:09.410512  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:09.410518  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:09.410579  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:09.446623  559515 cri.go:89] found id: ""
	I1210 08:44:09.446652  559515 logs.go:282] 0 containers: []
	W1210 08:44:09.446660  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:09.446666  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:09.446727  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:09.483949  559515 cri.go:89] found id: ""
	I1210 08:44:09.483982  559515 logs.go:282] 0 containers: []
	W1210 08:44:09.484003  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:09.484009  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:09.484091  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:09.528001  559515 cri.go:89] found id: ""
	I1210 08:44:09.528023  559515 logs.go:282] 0 containers: []
	W1210 08:44:09.528031  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:09.528037  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:09.528096  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:09.569104  559515 cri.go:89] found id: ""
	I1210 08:44:09.569147  559515 logs.go:282] 0 containers: []
	W1210 08:44:09.569157  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:09.569163  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:09.569236  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:09.632230  559515 cri.go:89] found id: ""
	I1210 08:44:09.632294  559515 logs.go:282] 0 containers: []
	W1210 08:44:09.632319  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:09.632338  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:09.632415  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:09.666638  559515 cri.go:89] found id: ""
	I1210 08:44:09.666669  559515 logs.go:282] 0 containers: []
	W1210 08:44:09.666678  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:09.666687  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:09.666698  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:09.782762  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:09.782845  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:09.799605  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:09.799677  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:09.886557  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:09.886619  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:09.886649  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:09.921828  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:09.921910  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:12.465272  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:12.475422  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:12.475496  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:12.501663  559515 cri.go:89] found id: ""
	I1210 08:44:12.501688  559515 logs.go:282] 0 containers: []
	W1210 08:44:12.501696  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:12.501702  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:12.501763  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:12.528173  559515 cri.go:89] found id: ""
	I1210 08:44:12.528197  559515 logs.go:282] 0 containers: []
	W1210 08:44:12.528206  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:12.528212  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:12.528279  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:12.554215  559515 cri.go:89] found id: ""
	I1210 08:44:12.554237  559515 logs.go:282] 0 containers: []
	W1210 08:44:12.554246  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:12.554252  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:12.554315  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:12.580220  559515 cri.go:89] found id: ""
	I1210 08:44:12.580244  559515 logs.go:282] 0 containers: []
	W1210 08:44:12.580253  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:12.580259  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:12.580343  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:12.606377  559515 cri.go:89] found id: ""
	I1210 08:44:12.606400  559515 logs.go:282] 0 containers: []
	W1210 08:44:12.606409  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:12.606416  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:12.606474  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:12.638965  559515 cri.go:89] found id: ""
	I1210 08:44:12.638990  559515 logs.go:282] 0 containers: []
	W1210 08:44:12.638999  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:12.639034  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:12.639116  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:12.674053  559515 cri.go:89] found id: ""
	I1210 08:44:12.674090  559515 logs.go:282] 0 containers: []
	W1210 08:44:12.674099  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:12.674105  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:12.674180  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:12.709122  559515 cri.go:89] found id: ""
	I1210 08:44:12.709151  559515 logs.go:282] 0 containers: []
	W1210 08:44:12.709159  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:12.709168  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:12.709179  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:12.798585  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:12.798634  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:12.819802  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:12.819834  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:12.916269  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:12.916290  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:12.916303  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:12.971849  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:12.971928  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
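
The repeated `The connection to the server localhost:8443 was refused` stderr pins the failure down: nothing is listening on the apiserver's secure port, so every `kubectl describe nodes` in the sweep fails identically, rather than failing for auth or kubeconfig reasons. A quick probe that reproduces the same distinction (localhost:8443 is taken from the log; the probe itself is illustrative):

    // checkport.go - probe the apiserver endpoint from the kubeconfig above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // A "connection refused" here matches the kubectl error in the
            // log: no listener on the port, so the API call never starts.
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }
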
	I1210 08:44:15.540984  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:15.550887  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:15.550954  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:15.576767  559515 cri.go:89] found id: ""
	I1210 08:44:15.576795  559515 logs.go:282] 0 containers: []
	W1210 08:44:15.576804  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:15.576810  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:15.576871  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:15.606135  559515 cri.go:89] found id: ""
	I1210 08:44:15.606158  559515 logs.go:282] 0 containers: []
	W1210 08:44:15.606167  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:15.606173  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:15.606233  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:15.633066  559515 cri.go:89] found id: ""
	I1210 08:44:15.633092  559515 logs.go:282] 0 containers: []
	W1210 08:44:15.633100  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:15.633106  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:15.633168  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:15.661439  559515 cri.go:89] found id: ""
	I1210 08:44:15.661478  559515 logs.go:282] 0 containers: []
	W1210 08:44:15.661487  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:15.661493  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:15.661552  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:15.687579  559515 cri.go:89] found id: ""
	I1210 08:44:15.687602  559515 logs.go:282] 0 containers: []
	W1210 08:44:15.687611  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:15.687617  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:15.687674  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:15.713097  559515 cri.go:89] found id: ""
	I1210 08:44:15.713124  559515 logs.go:282] 0 containers: []
	W1210 08:44:15.713134  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:15.713140  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:15.713198  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:15.739076  559515 cri.go:89] found id: ""
	I1210 08:44:15.739101  559515 logs.go:282] 0 containers: []
	W1210 08:44:15.739109  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:15.739115  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:15.739174  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:15.764047  559515 cri.go:89] found id: ""
	I1210 08:44:15.764070  559515 logs.go:282] 0 containers: []
	W1210 08:44:15.764079  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:15.764087  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:15.764099  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:15.833463  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:15.833498  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:15.850873  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:15.850902  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:15.936690  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:15.936771  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:15.936799  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:15.976572  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:15.976608  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:18.510427  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:18.520851  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:18.520924  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:18.547181  559515 cri.go:89] found id: ""
	I1210 08:44:18.547207  559515 logs.go:282] 0 containers: []
	W1210 08:44:18.547217  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:18.547223  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:18.547285  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:18.572505  559515 cri.go:89] found id: ""
	I1210 08:44:18.572531  559515 logs.go:282] 0 containers: []
	W1210 08:44:18.572540  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:18.572546  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:18.572609  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:18.602825  559515 cri.go:89] found id: ""
	I1210 08:44:18.602850  559515 logs.go:282] 0 containers: []
	W1210 08:44:18.602860  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:18.602866  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:18.602928  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:18.627892  559515 cri.go:89] found id: ""
	I1210 08:44:18.627918  559515 logs.go:282] 0 containers: []
	W1210 08:44:18.627928  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:18.627934  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:18.627996  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:18.653789  559515 cri.go:89] found id: ""
	I1210 08:44:18.653814  559515 logs.go:282] 0 containers: []
	W1210 08:44:18.653823  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:18.653851  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:18.653942  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:18.682018  559515 cri.go:89] found id: ""
	I1210 08:44:18.682047  559515 logs.go:282] 0 containers: []
	W1210 08:44:18.682056  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:18.682063  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:18.682123  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:18.708051  559515 cri.go:89] found id: ""
	I1210 08:44:18.708077  559515 logs.go:282] 0 containers: []
	W1210 08:44:18.708087  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:18.708093  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:18.708153  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:18.738788  559515 cri.go:89] found id: ""
	I1210 08:44:18.738814  559515 logs.go:282] 0 containers: []
	W1210 08:44:18.738824  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:18.738836  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:18.738848  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:18.771467  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:18.771503  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:18.800023  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:18.800050  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:18.868256  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:18.868293  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:18.884590  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:18.884671  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:18.965460  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:21.465731  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:21.475920  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:21.475992  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:21.506065  559515 cri.go:89] found id: ""
	I1210 08:44:21.506091  559515 logs.go:282] 0 containers: []
	W1210 08:44:21.506100  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:21.506106  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:21.506171  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:21.530897  559515 cri.go:89] found id: ""
	I1210 08:44:21.530922  559515 logs.go:282] 0 containers: []
	W1210 08:44:21.530931  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:21.530937  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:21.530999  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:21.556664  559515 cri.go:89] found id: ""
	I1210 08:44:21.556686  559515 logs.go:282] 0 containers: []
	W1210 08:44:21.556695  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:21.556700  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:21.556758  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:21.584466  559515 cri.go:89] found id: ""
	I1210 08:44:21.584494  559515 logs.go:282] 0 containers: []
	W1210 08:44:21.584503  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:21.584509  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:21.584574  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:21.609418  559515 cri.go:89] found id: ""
	I1210 08:44:21.609452  559515 logs.go:282] 0 containers: []
	W1210 08:44:21.609461  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:21.609467  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:21.609526  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:21.634990  559515 cri.go:89] found id: ""
	I1210 08:44:21.635034  559515 logs.go:282] 0 containers: []
	W1210 08:44:21.635043  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:21.635050  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:21.635115  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:21.661783  559515 cri.go:89] found id: ""
	I1210 08:44:21.661812  559515 logs.go:282] 0 containers: []
	W1210 08:44:21.661821  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:21.661827  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:21.661922  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:21.690000  559515 cri.go:89] found id: ""
	I1210 08:44:21.690024  559515 logs.go:282] 0 containers: []
	W1210 08:44:21.690034  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:21.690042  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:21.690055  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:21.757529  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:21.757565  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:21.773395  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:21.773429  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:21.836145  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:21.836167  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:21.836180  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:21.866397  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:21.866429  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
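
For reference, the diagnostic sweep in each iteration runs the same four fixed commands, in varying order: kubelet and CRI-O unit logs via journalctl, a priority-filtered dmesg tail, and a container listing that falls back from crictl to docker. A sketch that replays the sweep (command strings copied verbatim from the log; the runner around them is hypothetical):

    // gatherlogs.go - replays the diagnostic sweep seen in each iteration.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        type gather struct{ name, cmd string }
        passes := []gather{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, g := range passes {
            // Each command is run through bash, as ssh_runner does in the log.
            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", g.name, err, out)
        }
    }
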
	I1210 08:44:24.395779  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:24.405642  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:24.405713  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:24.430854  559515 cri.go:89] found id: ""
	I1210 08:44:24.430876  559515 logs.go:282] 0 containers: []
	W1210 08:44:24.430885  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:24.430892  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:24.430957  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:24.457514  559515 cri.go:89] found id: ""
	I1210 08:44:24.457540  559515 logs.go:282] 0 containers: []
	W1210 08:44:24.457549  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:24.457554  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:24.457624  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:24.484575  559515 cri.go:89] found id: ""
	I1210 08:44:24.484602  559515 logs.go:282] 0 containers: []
	W1210 08:44:24.484611  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:24.484623  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:24.484683  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:24.510040  559515 cri.go:89] found id: ""
	I1210 08:44:24.510067  559515 logs.go:282] 0 containers: []
	W1210 08:44:24.510077  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:24.510083  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:24.510143  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:24.540253  559515 cri.go:89] found id: ""
	I1210 08:44:24.540280  559515 logs.go:282] 0 containers: []
	W1210 08:44:24.540289  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:24.540296  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:24.540425  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:24.566273  559515 cri.go:89] found id: ""
	I1210 08:44:24.566300  559515 logs.go:282] 0 containers: []
	W1210 08:44:24.566309  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:24.566315  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:24.566374  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:24.594054  559515 cri.go:89] found id: ""
	I1210 08:44:24.594083  559515 logs.go:282] 0 containers: []
	W1210 08:44:24.594093  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:24.594098  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:24.594164  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:24.618760  559515 cri.go:89] found id: ""
	I1210 08:44:24.618783  559515 logs.go:282] 0 containers: []
	W1210 08:44:24.618792  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:24.618801  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:24.618813  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:24.686445  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:24.686481  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:24.702663  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:24.702751  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:24.780964  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:24.780983  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:24.780997  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:24.812358  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:24.812394  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:27.340929  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:27.351095  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:27.351175  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:27.377153  559515 cri.go:89] found id: ""
	I1210 08:44:27.377182  559515 logs.go:282] 0 containers: []
	W1210 08:44:27.377191  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:27.377197  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:27.377260  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:27.401733  559515 cri.go:89] found id: ""
	I1210 08:44:27.401760  559515 logs.go:282] 0 containers: []
	W1210 08:44:27.401769  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:27.401775  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:27.401832  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:27.426871  559515 cri.go:89] found id: ""
	I1210 08:44:27.426897  559515 logs.go:282] 0 containers: []
	W1210 08:44:27.426906  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:27.426912  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:27.426971  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:27.451727  559515 cri.go:89] found id: ""
	I1210 08:44:27.451751  559515 logs.go:282] 0 containers: []
	W1210 08:44:27.451760  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:27.451766  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:27.451830  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:27.482314  559515 cri.go:89] found id: ""
	I1210 08:44:27.482338  559515 logs.go:282] 0 containers: []
	W1210 08:44:27.482346  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:27.482352  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:27.482409  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:27.507763  559515 cri.go:89] found id: ""
	I1210 08:44:27.507786  559515 logs.go:282] 0 containers: []
	W1210 08:44:27.507794  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:27.507800  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:27.507858  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:27.533127  559515 cri.go:89] found id: ""
	I1210 08:44:27.533149  559515 logs.go:282] 0 containers: []
	W1210 08:44:27.533163  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:27.533169  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:27.533227  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:27.557255  559515 cri.go:89] found id: ""
	I1210 08:44:27.557278  559515 logs.go:282] 0 containers: []
	W1210 08:44:27.557287  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:27.557295  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:27.557306  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:27.624503  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:27.624542  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:27.640919  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:27.640952  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:27.705596  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:27.705660  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:27.705681  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:27.741728  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:27.741764  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:30.273477  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:30.284195  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:30.284278  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:30.310027  559515 cri.go:89] found id: ""
	I1210 08:44:30.310049  559515 logs.go:282] 0 containers: []
	W1210 08:44:30.310057  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:30.310063  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:30.310120  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:30.337215  559515 cri.go:89] found id: ""
	I1210 08:44:30.337239  559515 logs.go:282] 0 containers: []
	W1210 08:44:30.337248  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:30.337254  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:30.337314  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:30.362549  559515 cri.go:89] found id: ""
	I1210 08:44:30.362572  559515 logs.go:282] 0 containers: []
	W1210 08:44:30.362580  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:30.362586  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:30.362644  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:30.387348  559515 cri.go:89] found id: ""
	I1210 08:44:30.387383  559515 logs.go:282] 0 containers: []
	W1210 08:44:30.387392  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:30.387398  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:30.387456  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:30.416697  559515 cri.go:89] found id: ""
	I1210 08:44:30.416723  559515 logs.go:282] 0 containers: []
	W1210 08:44:30.416733  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:30.416739  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:30.416796  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:30.447020  559515 cri.go:89] found id: ""
	I1210 08:44:30.447044  559515 logs.go:282] 0 containers: []
	W1210 08:44:30.447053  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:30.447059  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:30.447115  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:30.474431  559515 cri.go:89] found id: ""
	I1210 08:44:30.474459  559515 logs.go:282] 0 containers: []
	W1210 08:44:30.474469  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:30.474475  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:30.474535  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:30.500400  559515 cri.go:89] found id: ""
	I1210 08:44:30.500424  559515 logs.go:282] 0 containers: []
	W1210 08:44:30.500433  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:30.500441  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:30.500460  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:30.566519  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:30.566541  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:30.566554  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:30.597138  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:30.597176  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:30.624391  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:30.624418  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:30.691296  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:30.691335  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:33.209503  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:33.225114  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:33.225183  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:33.258565  559515 cri.go:89] found id: ""
	I1210 08:44:33.258585  559515 logs.go:282] 0 containers: []
	W1210 08:44:33.258594  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:33.258600  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:33.258660  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:33.289523  559515 cri.go:89] found id: ""
	I1210 08:44:33.289545  559515 logs.go:282] 0 containers: []
	W1210 08:44:33.289554  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:33.289561  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:33.289621  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:33.329406  559515 cri.go:89] found id: ""
	I1210 08:44:33.329438  559515 logs.go:282] 0 containers: []
	W1210 08:44:33.329447  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:33.329456  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:33.329515  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:33.361078  559515 cri.go:89] found id: ""
	I1210 08:44:33.361098  559515 logs.go:282] 0 containers: []
	W1210 08:44:33.361107  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:33.361113  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:33.361177  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:33.410453  559515 cri.go:89] found id: ""
	I1210 08:44:33.410476  559515 logs.go:282] 0 containers: []
	W1210 08:44:33.410485  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:33.410490  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:33.410547  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:33.451111  559515 cri.go:89] found id: ""
	I1210 08:44:33.451132  559515 logs.go:282] 0 containers: []
	W1210 08:44:33.451141  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:33.451147  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:33.451210  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:33.484252  559515 cri.go:89] found id: ""
	I1210 08:44:33.484274  559515 logs.go:282] 0 containers: []
	W1210 08:44:33.484283  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:33.484289  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:33.484348  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:33.513455  559515 cri.go:89] found id: ""
	I1210 08:44:33.513477  559515 logs.go:282] 0 containers: []
	W1210 08:44:33.513485  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:33.513494  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:33.513506  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:33.580859  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:33.580895  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:33.597253  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:33.597286  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:33.657389  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:33.657456  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:33.657486  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:33.687640  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:33.687677  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:36.229925  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:36.247882  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:36.247958  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:36.281153  559515 cri.go:89] found id: ""
	I1210 08:44:36.281181  559515 logs.go:282] 0 containers: []
	W1210 08:44:36.281190  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:36.281197  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:36.281255  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:36.321612  559515 cri.go:89] found id: ""
	I1210 08:44:36.321641  559515 logs.go:282] 0 containers: []
	W1210 08:44:36.321649  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:36.321657  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:36.321716  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:36.353868  559515 cri.go:89] found id: ""
	I1210 08:44:36.353894  559515 logs.go:282] 0 containers: []
	W1210 08:44:36.353903  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:36.353909  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:36.353976  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:36.387965  559515 cri.go:89] found id: ""
	I1210 08:44:36.387992  559515 logs.go:282] 0 containers: []
	W1210 08:44:36.388001  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:36.388007  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:36.388071  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:36.427754  559515 cri.go:89] found id: ""
	I1210 08:44:36.427782  559515 logs.go:282] 0 containers: []
	W1210 08:44:36.427791  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:36.427798  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:36.427856  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:36.456364  559515 cri.go:89] found id: ""
	I1210 08:44:36.456434  559515 logs.go:282] 0 containers: []
	W1210 08:44:36.456445  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:36.456452  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:36.456539  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:36.509573  559515 cri.go:89] found id: ""
	I1210 08:44:36.509647  559515 logs.go:282] 0 containers: []
	W1210 08:44:36.509670  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:36.509689  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:36.509777  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:36.549545  559515 cri.go:89] found id: ""
	I1210 08:44:36.549624  559515 logs.go:282] 0 containers: []
	W1210 08:44:36.549648  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:36.549685  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:36.549714  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:36.637590  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:36.637670  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:36.653953  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:36.653985  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:36.751123  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:36.751150  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:36.751164  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:36.786277  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:36.786355  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:39.320866  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:39.330364  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:39.330431  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:39.356427  559515 cri.go:89] found id: ""
	I1210 08:44:39.356450  559515 logs.go:282] 0 containers: []
	W1210 08:44:39.356459  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:39.356465  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:39.356523  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:39.382787  559515 cri.go:89] found id: ""
	I1210 08:44:39.382811  559515 logs.go:282] 0 containers: []
	W1210 08:44:39.382821  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:39.382827  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:39.382886  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:39.407138  559515 cri.go:89] found id: ""
	I1210 08:44:39.407218  559515 logs.go:282] 0 containers: []
	W1210 08:44:39.407239  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:39.407246  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:39.407328  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:39.431952  559515 cri.go:89] found id: ""
	I1210 08:44:39.431976  559515 logs.go:282] 0 containers: []
	W1210 08:44:39.431984  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:39.431990  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:39.432056  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:39.459689  559515 cri.go:89] found id: ""
	I1210 08:44:39.459713  559515 logs.go:282] 0 containers: []
	W1210 08:44:39.459722  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:39.459728  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:39.459803  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:39.504635  559515 cri.go:89] found id: ""
	I1210 08:44:39.504671  559515 logs.go:282] 0 containers: []
	W1210 08:44:39.504680  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:39.504686  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:39.504782  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:39.535389  559515 cri.go:89] found id: ""
	I1210 08:44:39.535464  559515 logs.go:282] 0 containers: []
	W1210 08:44:39.535490  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:39.535511  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:39.535604  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:39.565330  559515 cri.go:89] found id: ""
	I1210 08:44:39.565418  559515 logs.go:282] 0 containers: []
	W1210 08:44:39.565443  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:39.565475  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:39.565507  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:39.644405  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:39.644488  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:39.661321  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:39.661346  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:39.759071  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:39.759133  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:39.759161  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:39.794693  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:39.794728  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:42.325836  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:42.336669  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:44:42.336745  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:44:42.368450  559515 cri.go:89] found id: ""
	I1210 08:44:42.368474  559515 logs.go:282] 0 containers: []
	W1210 08:44:42.368482  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:44:42.368488  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:44:42.368549  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:44:42.398763  559515 cri.go:89] found id: ""
	I1210 08:44:42.398786  559515 logs.go:282] 0 containers: []
	W1210 08:44:42.398794  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:44:42.398801  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:44:42.398859  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:44:42.424173  559515 cri.go:89] found id: ""
	I1210 08:44:42.424196  559515 logs.go:282] 0 containers: []
	W1210 08:44:42.424204  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:44:42.424210  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:44:42.424270  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:44:42.452957  559515 cri.go:89] found id: ""
	I1210 08:44:42.453034  559515 logs.go:282] 0 containers: []
	W1210 08:44:42.453058  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:44:42.453073  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:44:42.453153  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:44:42.478126  559515 cri.go:89] found id: ""
	I1210 08:44:42.478153  559515 logs.go:282] 0 containers: []
	W1210 08:44:42.478161  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:44:42.478167  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:44:42.478224  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:44:42.504672  559515 cri.go:89] found id: ""
	I1210 08:44:42.504700  559515 logs.go:282] 0 containers: []
	W1210 08:44:42.504709  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:44:42.504721  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:44:42.504781  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:44:42.530090  559515 cri.go:89] found id: ""
	I1210 08:44:42.530114  559515 logs.go:282] 0 containers: []
	W1210 08:44:42.530122  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:44:42.530128  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:44:42.530187  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:44:42.558144  559515 cri.go:89] found id: ""
	I1210 08:44:42.558167  559515 logs.go:282] 0 containers: []
	W1210 08:44:42.558176  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:44:42.558185  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:44:42.558196  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:44:42.622385  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:44:42.622407  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:44:42.622420  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:44:42.652817  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:44:42.652851  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 08:44:42.692968  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:44:42.692999  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:44:42.773243  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:44:42.773281  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:44:45.294225  559515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:44:45.304742  559515 kubeadm.go:602] duration metric: took 4m2.364957457s to restartPrimaryControlPlane
	W1210 08:44:45.304816  559515 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 08:44:45.304886  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 08:44:45.728383  559515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:44:45.746316  559515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 08:44:45.759536  559515 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 08:44:45.759607  559515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 08:44:45.771517  559515 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 08:44:45.771537  559515 kubeadm.go:158] found existing configuration files:
	
	I1210 08:44:45.771592  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 08:44:45.784954  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 08:44:45.785017  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 08:44:45.793473  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 08:44:45.803490  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 08:44:45.803552  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 08:44:45.812801  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 08:44:45.823456  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 08:44:45.823522  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 08:44:45.833317  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 08:44:45.842370  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 08:44:45.842486  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
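	The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not mention https://control-plane.minikube.internal:8443 is deleted so kubeadm regenerates it. A bash condensation of the same steps (the per-file commands match the log; the loop is a condensation, not minikube's code):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # grep exits non-zero when the file is missing or lacks the endpoint;
	      # either way the file is removed so "kubeadm init" writes a fresh one.
	      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done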
	I1210 08:44:45.850787  559515 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 08:44:45.899499  559515 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 08:44:45.899725  559515 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 08:44:46.022006  559515 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 08:44:46.022163  559515 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 08:44:46.022229  559515 kubeadm.go:319] OS: Linux
	I1210 08:44:46.022308  559515 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 08:44:46.022377  559515 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 08:44:46.022452  559515 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 08:44:46.022536  559515 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 08:44:46.022620  559515 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 08:44:46.022706  559515 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 08:44:46.022788  559515 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 08:44:46.022867  559515 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 08:44:46.022935  559515 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 08:44:46.111761  559515 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 08:44:46.111933  559515 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 08:44:46.112059  559515 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 08:44:46.120958  559515 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 08:44:46.124694  559515 out.go:252]   - Generating certificates and keys ...
	I1210 08:44:46.124840  559515 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 08:44:46.124942  559515 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 08:44:46.125046  559515 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 08:44:46.125132  559515 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 08:44:46.125247  559515 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 08:44:46.125325  559515 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 08:44:46.125435  559515 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 08:44:46.125519  559515 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 08:44:46.125903  559515 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 08:44:46.126459  559515 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 08:44:46.126950  559515 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 08:44:46.127332  559515 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 08:44:46.255906  559515 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 08:44:46.470334  559515 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 08:44:46.551882  559515 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 08:44:47.032529  559515 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 08:44:47.142996  559515 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 08:44:47.143844  559515 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 08:44:47.146393  559515 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 08:44:47.149594  559515 out.go:252]   - Booting up control plane ...
	I1210 08:44:47.149698  559515 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 08:44:47.149776  559515 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 08:44:47.149843  559515 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 08:44:47.166496  559515 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 08:44:47.166613  559515 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 08:44:47.174164  559515 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 08:44:47.174469  559515 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 08:44:47.174701  559515 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 08:44:47.310106  559515 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 08:44:47.310227  559515 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:48:47.310820  559515 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001096195s
	I1210 08:48:47.311245  559515 kubeadm.go:319] 
	I1210 08:48:47.311333  559515 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 08:48:47.311374  559515 kubeadm.go:319] 	- The kubelet is not running
	I1210 08:48:47.311503  559515 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 08:48:47.311515  559515 kubeadm.go:319] 
	I1210 08:48:47.311644  559515 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 08:48:47.311679  559515 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 08:48:47.311717  559515 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 08:48:47.311726  559515 kubeadm.go:319] 
	I1210 08:48:47.315881  559515 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:48:47.316383  559515 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 08:48:47.316523  559515 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 08:48:47.316821  559515 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1210 08:48:47.316831  559515 kubeadm.go:319] 
	I1210 08:48:47.316904  559515 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1210 08:48:47.317040  559515 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001096195s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
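	Before the reset and retry that follow, the failure can be triaged on the node with the checks kubeadm itself suggests above; the curl probe is the same healthz call kubeadm polled for 4m0s:

	    curl -sSL http://127.0.0.1:10248/healthz   # kubelet healthz; connection refused here
	    systemctl status kubelet                   # is the unit active at all?
	    journalctl -xeu kubelet | tail -n 100      # kubelet's own exit reason, if any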
	
	I1210 08:48:47.317136  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 08:48:47.740448  559515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:48:47.753700  559515 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 08:48:47.753767  559515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 08:48:47.761341  559515 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 08:48:47.761363  559515 kubeadm.go:158] found existing configuration files:
	
	I1210 08:48:47.761423  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 08:48:47.769049  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 08:48:47.769115  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 08:48:47.776416  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 08:48:47.784425  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 08:48:47.784491  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 08:48:47.791648  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 08:48:47.799477  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 08:48:47.799555  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 08:48:47.806922  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 08:48:47.814642  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 08:48:47.814712  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 08:48:47.821810  559515 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 08:48:47.861901  559515 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1210 08:48:47.862238  559515 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 08:48:47.935006  559515 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 08:48:47.935099  559515 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 08:48:47.935146  559515 kubeadm.go:319] OS: Linux
	I1210 08:48:47.935196  559515 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 08:48:47.935254  559515 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 08:48:47.935311  559515 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 08:48:47.935371  559515 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 08:48:47.935423  559515 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 08:48:47.935483  559515 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 08:48:47.935539  559515 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 08:48:47.935596  559515 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 08:48:47.935646  559515 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 08:48:47.996665  559515 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 08:48:47.996792  559515 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 08:48:47.996892  559515 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 08:48:48.011540  559515 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 08:48:48.017150  559515 out.go:252]   - Generating certificates and keys ...
	I1210 08:48:48.017295  559515 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 08:48:48.017404  559515 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 08:48:48.017522  559515 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 08:48:48.017622  559515 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1210 08:48:48.017710  559515 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 08:48:48.017799  559515 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1210 08:48:48.017889  559515 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1210 08:48:48.017990  559515 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1210 08:48:48.018119  559515 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 08:48:48.018237  559515 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 08:48:48.018293  559515 kubeadm.go:319] [certs] Using the existing "sa" key
	I1210 08:48:48.018393  559515 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 08:48:48.303791  559515 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 08:48:48.444096  559515 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 08:48:48.625182  559515 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 08:48:48.691031  559515 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 08:48:48.905375  559515 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 08:48:48.906131  559515 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 08:48:48.910608  559515 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 08:48:48.913821  559515 out.go:252]   - Booting up control plane ...
	I1210 08:48:48.913928  559515 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 08:48:48.914023  559515 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 08:48:48.914532  559515 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 08:48:48.929962  559515 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 08:48:48.930294  559515 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 08:48:48.937539  559515 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 08:48:48.937938  559515 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 08:48:48.938149  559515 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 08:48:49.069981  559515 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 08:48:49.070100  559515 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:52:49.070241  559515 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000630101s
	I1210 08:52:49.070275  559515 kubeadm.go:319] 
	I1210 08:52:49.070333  559515 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 08:52:49.070367  559515 kubeadm.go:319] 	- The kubelet is not running
	I1210 08:52:49.070473  559515 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 08:52:49.070479  559515 kubeadm.go:319] 
	I1210 08:52:49.070584  559515 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 08:52:49.070616  559515 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 08:52:49.070647  559515 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 08:52:49.070652  559515 kubeadm.go:319] 
	I1210 08:52:49.074274  559515 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:52:49.074694  559515 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 08:52:49.074802  559515 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 08:52:49.075047  559515 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 08:52:49.075054  559515 kubeadm.go:319] 
	I1210 08:52:49.075122  559515 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
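	Both init attempts fail identically, and the repeated cgroups v1 warning is the strongest lead: this 5.15.0-1084-aws host is on cgroup v1, which the warning says kubelet v1.35 or newer refuses unless FailCgroupV1 is set to false. A hedged check-and-workaround sketch (the failCgroupV1 field name follows the warning text and the linked KEP; whether minikube's generated /var/lib/kubelet/config.yaml already sets it is an assumption, not shown in this log):

	    stat -fc %T /sys/fs/cgroup                 # "tmpfs" => cgroup v1, "cgroup2fs" => v2
	    # If on v1 with kubelet >= v1.35, the warning says failCgroupV1 must be false:
	    grep -i failcgroupv1 /var/lib/kubelet/config.yaml \
	      || echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
	    sudo systemctl restart kubelet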
	I1210 08:52:49.075172  559515 kubeadm.go:403] duration metric: took 12m6.229980771s to StartCluster
	I1210 08:52:49.075205  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:52:49.075263  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:52:49.133290  559515 cri.go:89] found id: ""
	I1210 08:52:49.133314  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.133323  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:52:49.133330  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:52:49.133395  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:52:49.179903  559515 cri.go:89] found id: ""
	I1210 08:52:49.179925  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.179933  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:52:49.179940  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:52:49.179997  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:52:49.229615  559515 cri.go:89] found id: ""
	I1210 08:52:49.229638  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.229647  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:52:49.229658  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:52:49.229716  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:52:49.274404  559515 cri.go:89] found id: ""
	I1210 08:52:49.274427  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.274435  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:52:49.274441  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:52:49.274501  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:52:49.329110  559515 cri.go:89] found id: ""
	I1210 08:52:49.329189  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.329212  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:52:49.329229  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:52:49.329343  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:52:49.380464  559515 cri.go:89] found id: ""
	I1210 08:52:49.380538  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.380579  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:52:49.380601  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:52:49.380689  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:52:49.456104  559515 cri.go:89] found id: ""
	I1210 08:52:49.456178  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.456201  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:52:49.456222  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:52:49.456332  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:52:49.532672  559515 cri.go:89] found id: ""
	I1210 08:52:49.532761  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.532785  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:52:49.532826  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:52:49.532855  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:52:49.635747  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:52:49.635823  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:52:49.655415  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:52:49.655497  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:52:49.775827  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:52:49.775899  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:52:49.775925  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:52:49.835125  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:52:49.835200  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 08:52:49.895453  559515 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000630101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 08:52:49.895511  559515 out.go:285] * 
	W1210 08:52:49.895562  559515 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000630101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:52:49.895571  559515 out.go:285] * 
	W1210 08:52:49.897703  559515 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 08:52:49.903271  559515 out.go:203] 
	W1210 08:52:49.907070  559515 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000630101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:52:49.907125  559515 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 08:52:49.907145  559515 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 08:52:49.910310  559515 out.go:203] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-470056 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
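The kubeadm failure above bottoms out at an unhealthy kubelet, and the log's own suggestion is to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal retry sketch built from commands already shown in this report (the profile name and start flags are the ones from this run; whether the systemd cgroup driver resolves this particular failure is an assumption, not something this run verified):

	# Retry the same start with the cgroup-driver hint from the failure output (assumed fix).
	out/minikube-linux-arm64 start -p kubernetes-upgrade-470056 --memory=3072 \
	  --kubernetes-version=v1.35.0-beta.0 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd
	# Probe the same health endpoint the kubeadm wait-control-plane phase polls.
	out/minikube-linux-arm64 ssh -p kubernetes-upgrade-470056 -- curl -sSL http://127.0.0.1:10248/healthz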
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-470056 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-470056 version --output=json: exit status 1 (194.395591ms)

                                                
                                                
-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.85.2:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
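The client half of the version call is healthy (kubectl v1.33.2 prints its own version), so the exit status 1 comes from the server side: nothing is accepting connections on 192.168.85.2:8443, consistent with the kubelet never bringing up the apiserver static pod. A hypothetical follow-up probe, assuming nc is available on the host (the crictl filter mirrors the one the log-gathering step above already ran):

	# Is anything listening on the apiserver port of the node container?
	nc -zv 192.168.85.2 8443
	# Was a kube-apiserver container ever created inside the node?
	docker exec kubernetes-upgrade-470056 crictl ps -a --name kube-apiserver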
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-10 08:52:51.227797983 +0000 UTC m=+5306.501604537
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-470056
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-470056:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f640952afb333e7c6d523807cd47fa65efd138506afc0fb3d3ce31620a64404b",
	        "Created": "2025-12-10T08:40:01.158732507Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 559640,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T08:40:30.485468473Z",
	            "FinishedAt": "2025-12-10T08:40:29.345137205Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/f640952afb333e7c6d523807cd47fa65efd138506afc0fb3d3ce31620a64404b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f640952afb333e7c6d523807cd47fa65efd138506afc0fb3d3ce31620a64404b/hostname",
	        "HostsPath": "/var/lib/docker/containers/f640952afb333e7c6d523807cd47fa65efd138506afc0fb3d3ce31620a64404b/hosts",
	        "LogPath": "/var/lib/docker/containers/f640952afb333e7c6d523807cd47fa65efd138506afc0fb3d3ce31620a64404b/f640952afb333e7c6d523807cd47fa65efd138506afc0fb3d3ce31620a64404b-json.log",
	        "Name": "/kubernetes-upgrade-470056",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "kubernetes-upgrade-470056:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-470056",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f640952afb333e7c6d523807cd47fa65efd138506afc0fb3d3ce31620a64404b",
	                "LowerDir": "/var/lib/docker/overlay2/65b962137e7ed74ab0a52b1e0ab3fd744898658c4cc164c1fc932d98eeebe089-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65b962137e7ed74ab0a52b1e0ab3fd744898658c4cc164c1fc932d98eeebe089/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65b962137e7ed74ab0a52b1e0ab3fd744898658c4cc164c1fc932d98eeebe089/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65b962137e7ed74ab0a52b1e0ab3fd744898658c4cc164c1fc932d98eeebe089/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-470056",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-470056/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-470056",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-470056",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-470056",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63f72fafb4551ec59cd0b70af5315996aa7b14c2bd9d9c2d3e086f9d6af6e18a",
	            "SandboxKey": "/var/run/docker/netns/63f72fafb455",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-470056": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:d5:23:e3:82:17",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "79704df549047ad7d9d2c518cb259397438f2ea488e847de2bcd52a622c963d0",
	                    "EndpointID": "76c9b36ad8576460b66dbd52f8a09e6a55070e5cd55add9f4511a210d10e7f9c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-470056",
	                        "f640952afb33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-470056 -n kubernetes-upgrade-470056
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-470056 -n kubernetes-upgrade-470056: exit status 2 (424.402463ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-470056 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p kubernetes-upgrade-470056 logs -n 25: (1.102429151s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-831378 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo systemctl status docker --all --full --no-pager                                      │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo systemctl cat docker --no-pager                                                      │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo cat /etc/docker/daemon.json                                                          │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo docker system info                                                                   │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo cri-dockerd --version                                                                │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo systemctl cat containerd --no-pager                                                  │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo cat /etc/containerd/config.toml                                                      │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo containerd config dump                                                               │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo systemctl status crio --all --full --no-pager                                        │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo systemctl cat crio --no-pager                                                        │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ ssh     │ -p cilium-831378 sudo crio config                                                                          │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │                     │
	│ delete  │ -p cilium-831378                                                                                           │ cilium-831378            │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │ 10 Dec 25 08:51 UTC │
	│ start   │ -p force-systemd-env-236305 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-236305 │ jenkins │ v1.37.0 │ 10 Dec 25 08:51 UTC │ 10 Dec 25 08:52 UTC │
	│ delete  │ -p force-systemd-env-236305                                                                                │ force-systemd-env-236305 │ jenkins │ v1.37.0 │ 10 Dec 25 08:52 UTC │ 10 Dec 25 08:52 UTC │
	│ start   │ -p cert-expiration-682065 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-682065   │ jenkins │ v1.37.0 │ 10 Dec 25 08:52 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 08:52:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 08:52:25.040255  598084 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:52:25.040410  598084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:52:25.040415  598084 out.go:374] Setting ErrFile to fd 2...
	I1210 08:52:25.040419  598084 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:52:25.040697  598084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:52:25.041168  598084 out.go:368] Setting JSON to false
	I1210 08:52:25.042087  598084 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12895,"bootTime":1765343850,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 08:52:25.042154  598084 start.go:143] virtualization:  
	I1210 08:52:25.047791  598084 out.go:179] * [cert-expiration-682065] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 08:52:25.051569  598084 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:52:25.051642  598084 notify.go:221] Checking for updates...
	I1210 08:52:25.058604  598084 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:52:25.061969  598084 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:52:25.065289  598084 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 08:52:25.068663  598084 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:52:25.071781  598084 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:52:25.075650  598084 config.go:182] Loaded profile config "kubernetes-upgrade-470056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 08:52:25.075774  598084 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:52:25.107703  598084 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:52:25.107838  598084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:52:25.167902  598084 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:52:25.15812353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:52:25.168004  598084 docker.go:319] overlay module found
	I1210 08:52:25.171248  598084 out.go:179] * Using the docker driver based on user configuration
	I1210 08:52:25.174307  598084 start.go:309] selected driver: docker
	I1210 08:52:25.174322  598084 start.go:927] validating driver "docker" against <nil>
	I1210 08:52:25.174334  598084 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:52:25.175623  598084 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:52:25.232259  598084 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:52:25.222452433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:52:25.232406  598084 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 08:52:25.232615  598084 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
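The driver validation above amounts to shelling out to docker system info --format "{{json .}}" and decoding the JSON blob shown in the info.go:266 dumps. A minimal Go sketch of that probe, decoding only two of the fields (the field names come straight from the dump above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the cli_runner lines above: ask the daemon for
	// its full info blob as one JSON object.
	out, err := exec.Command("docker", "system", "info",
		"--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	// Decode just the fields a driver validator might care about.
	var info struct {
		NCPU     int   `json:"NCPU"`
		MemTotal int64 `json:"MemTotal"`
	}
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("cpus=%d mem=%dMiB\n", info.NCPU, info.MemTotal/1024/1024)
}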
	I1210 08:52:25.235649  598084 out.go:179] * Using Docker driver with root privileges
	I1210 08:52:25.238704  598084 cni.go:84] Creating CNI manager for ""
	I1210 08:52:25.238772  598084 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 08:52:25.238780  598084 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 08:52:25.238861  598084 start.go:353] cluster config:
	{Name:cert-expiration-682065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-682065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:52:25.242128  598084 out.go:179] * Starting "cert-expiration-682065" primary control-plane node in "cert-expiration-682065" cluster
	I1210 08:52:25.245073  598084 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 08:52:25.248067  598084 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 08:52:25.251036  598084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 08:52:25.251080  598084 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1210 08:52:25.251077  598084 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 08:52:25.251104  598084 cache.go:65] Caching tarball of preloaded images
	I1210 08:52:25.251195  598084 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 08:52:25.251204  598084 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1210 08:52:25.251315  598084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/config.json ...
	I1210 08:52:25.251331  598084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/config.json: {Name:mk6bd141e7e22af4a09a58736e012586fa8520ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
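Each lock.go:35] WriteFile acquiring line guards a profile write with a lock configured as Delay:500ms Timeout:1m0s. A sketch of the same guarded-write pattern, assuming a plain O_EXCL sidecar lockfile; minikube's real implementation uses a dedicated lock package, so this is illustration only:

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"time"
)

// writeLocked retries every 500ms until it owns path+".lock" or the
// timeout expires, then writes v as JSON (values mirror the log).
func writeLocked(path string, v any, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			break
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + lock)
		}
		time.Sleep(500 * time.Millisecond) // Delay:500ms, as in the log
	}
	defer os.Remove(lock)
	data, err := json.MarshalIndent(v, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o600)
}

func main() {
	cfg := map[string]any{"Name": "cert-expiration-682065", "Memory": 3072}
	if err := writeLocked("config.json", cfg, time.Minute); err != nil {
		fmt.Println(err)
	}
}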
	I1210 08:52:25.271752  598084 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 08:52:25.271764  598084 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
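The "skipping pull" / "skipping load" decision reduces to whether docker image inspect exits zero for the ref. A sketch, assuming the Docker CLI on PATH:

package main

import (
	"fmt"
	"io"
	"os/exec"
)

// imageInDaemon reports whether ref already exists in the local
// daemon; a zero exit from `docker image inspect` means no pull is
// needed, which is the check behind the lines above.
func imageInDaemon(ref string) bool {
	cmd := exec.Command("docker", "image", "inspect", ref)
	cmd.Stdout, cmd.Stderr = io.Discard, io.Discard
	return cmd.Run() == nil
}

func main() {
	fmt.Println(imageInDaemon("gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089"))
}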
	I1210 08:52:25.271782  598084 cache.go:243] Successfully downloaded all kic artifacts
	I1210 08:52:25.271824  598084 start.go:360] acquireMachinesLock for cert-expiration-682065: {Name:mkbceb4cd00490e08b95e152e8bff4eb555dcdd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 08:52:25.271937  598084 start.go:364] duration metric: took 99.028µs to acquireMachinesLock for "cert-expiration-682065"
	I1210 08:52:25.271963  598084 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-682065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-682065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 08:52:25.272026  598084 start.go:125] createHost starting for "" (driver="docker")
	I1210 08:52:25.275611  598084 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1210 08:52:25.275855  598084 start.go:159] libmachine.API.Create for "cert-expiration-682065" (driver="docker")
	I1210 08:52:25.275892  598084 client.go:173] LocalClient.Create starting
	I1210 08:52:25.275965  598084 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem
	I1210 08:52:25.276003  598084 main.go:143] libmachine: Decoding PEM data...
	I1210 08:52:25.276019  598084 main.go:143] libmachine: Parsing certificate...
	I1210 08:52:25.276079  598084 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem
	I1210 08:52:25.276107  598084 main.go:143] libmachine: Decoding PEM data...
	I1210 08:52:25.276126  598084 main.go:143] libmachine: Parsing certificate...
	I1210 08:52:25.276530  598084 cli_runner.go:164] Run: docker network inspect cert-expiration-682065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1210 08:52:25.297271  598084 cli_runner.go:211] docker network inspect cert-expiration-682065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1210 08:52:25.297355  598084 network_create.go:284] running [docker network inspect cert-expiration-682065] to gather additional debugging logs...
	I1210 08:52:25.297370  598084 cli_runner.go:164] Run: docker network inspect cert-expiration-682065
	W1210 08:52:25.314034  598084 cli_runner.go:211] docker network inspect cert-expiration-682065 returned with exit code 1
	I1210 08:52:25.314054  598084 network_create.go:287] error running [docker network inspect cert-expiration-682065]: docker network inspect cert-expiration-682065: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-682065 not found
	I1210 08:52:25.314066  598084 network_create.go:289] output of [docker network inspect cert-expiration-682065]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-682065 not found
	
	** /stderr **
	I1210 08:52:25.314211  598084 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 08:52:25.333441  598084 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c4372b9c6ca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:2b:b4:53:83:a1} reservation:<nil>}
	I1210 08:52:25.333923  598084 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-abcdafdcc359 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5a:da:0e:fa:08:36} reservation:<nil>}
	I1210 08:52:25.334360  598084 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2ca41742b403 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:5a:51:68:6f:bb:67} reservation:<nil>}
	I1210 08:52:25.334883  598084 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5380}
	I1210 08:52:25.334905  598084 network_create.go:124] attempt to create docker network cert-expiration-682065 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1210 08:52:25.334992  598084 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-682065 cert-expiration-682065
	I1210 08:52:25.397787  598084 network_create.go:108] docker network cert-expiration-682065 192.168.76.0/24 created
	I1210 08:52:25.397811  598084 kic.go:121] calculated static IP "192.168.76.2" for the "cert-expiration-682065" container
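The subnet walk above starts at 192.168.49.0/24 and steps the third octet by 9 (49, 58, 67, 76) until a block is free; the node then takes the .2 address beside the .1 gateway. A sketch of that selection with the first three subnets hard-coded as taken, matching the "skipping subnet ... that is taken" lines:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Subnets already claimed by other minikube networks, per the log.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	// Step the third octet by 9: the same 49 -> 58 -> 67 -> 76 walk.
	for octet := 49; octet < 256; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			continue
		}
		p := netip.MustParsePrefix(subnet)
		gw := p.Addr().Next() // .1: the bridge gateway
		node := gw.Next()     // .2: the container's static IP
		fmt.Println(subnet, gw, node)
		return
	}
}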
	I1210 08:52:25.397890  598084 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1210 08:52:25.434503  598084 cli_runner.go:164] Run: docker volume create cert-expiration-682065 --label name.minikube.sigs.k8s.io=cert-expiration-682065 --label created_by.minikube.sigs.k8s.io=true
	I1210 08:52:25.459160  598084 oci.go:103] Successfully created a docker volume cert-expiration-682065
	I1210 08:52:25.459263  598084 cli_runner.go:164] Run: docker run --rm --name cert-expiration-682065-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-682065 --entrypoint /usr/bin/test -v cert-expiration-682065:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -d /var/lib
	I1210 08:52:26.044431  598084 oci.go:107] Successfully prepared a docker volume cert-expiration-682065
	I1210 08:52:26.044499  598084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 08:52:26.044508  598084 kic.go:194] Starting extracting preloaded images to volume ...
	I1210 08:52:26.044585  598084 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-682065:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir
	I1210 08:52:30.088939  598084 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-682065:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca -I lz4 -xf /preloaded.tar -C /extractDir: (4.044317752s)
	I1210 08:52:30.088963  598084 kic.go:203] duration metric: took 4.044451325s to extract preloaded images to volume ...
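The preload lands in the volume via a throwaway container: bind-mount the tarball read-only, bind-mount the volume, and run tar with an lz4 filter. A sketch wrapping the same docker run --rm --entrypoint /usr/bin/tar invocation shown above (the paths and image ref passed in main are placeholders):

package main

import (
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4 preload tarball into a named Docker
// volume via a short-lived container, mirroring the command above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	_ = extractPreload("/tmp/preloaded.tar.lz4", "some-volume",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089")
}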
	W1210 08:52:30.089109  598084 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1210 08:52:30.089244  598084 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1210 08:52:30.150052  598084 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-682065 --name cert-expiration-682065 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-682065 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-682065 --network cert-expiration-682065 --ip 192.168.76.2 --volume cert-expiration-682065:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca
	I1210 08:52:30.485571  598084 cli_runner.go:164] Run: docker container inspect cert-expiration-682065 --format={{.State.Running}}
	I1210 08:52:30.503856  598084 cli_runner.go:164] Run: docker container inspect cert-expiration-682065 --format={{.State.Status}}
	I1210 08:52:30.530694  598084 cli_runner.go:164] Run: docker exec cert-expiration-682065 stat /var/lib/dpkg/alternatives/iptables
	I1210 08:52:30.581303  598084 oci.go:144] the created container "cert-expiration-682065" has a running status.
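Note the --publish=127.0.0.1::8443 style flags in the docker run above: no host port is given, so Docker picks a free one, and later steps recover it with the NetworkSettings.Ports inspect template (port 33418 for 22/tcp here). A sketch of that lookup:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks Docker which ephemeral host port was bound for a
// container port, using the same Go template as the log above.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	p, err := hostPort("cert-expiration-682065", "22/tcp")
	fmt.Println(p, err)
}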
	I1210 08:52:30.581323  598084 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/cert-expiration-682065/id_rsa...
	I1210 08:52:30.832001  598084 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22089-376671/.minikube/machines/cert-expiration-682065/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1210 08:52:30.860570  598084 cli_runner.go:164] Run: docker container inspect cert-expiration-682065 --format={{.State.Status}}
	I1210 08:52:30.888829  598084 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1210 08:52:30.888845  598084 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-682065 chown docker:docker /home/docker/.ssh/authorized_keys]
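kic.go:225 generates a local RSA keypair and injects the public half as /home/docker/.ssh/authorized_keys (the 381-byte copy above). A sketch of the key generation, assuming golang.org/x/crypto/ssh for the authorized_keys encoding; the 2048-bit size is an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key size is illustrative; minikube's actual choice may differ.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	priv := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", priv, 0o600); err != nil {
		panic(err)
	}
	// This one-line public key is what gets appended to
	// /home/docker/.ssh/authorized_keys inside the container.
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}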
	I1210 08:52:30.954063  598084 cli_runner.go:164] Run: docker container inspect cert-expiration-682065 --format={{.State.Status}}
	I1210 08:52:30.980638  598084 machine.go:94] provisionDockerMachine start ...
	I1210 08:52:30.980727  598084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-682065
	I1210 08:52:31.003129  598084 main.go:143] libmachine: Using SSH client type: native
	I1210 08:52:31.003478  598084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1210 08:52:31.003497  598084 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 08:52:31.004317  598084 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37338->127.0.0.1:33418: read: connection reset by peer
	I1210 08:52:34.142797  598084 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-682065
	
	I1210 08:52:34.142814  598084 ubuntu.go:182] provisioning hostname "cert-expiration-682065"
	I1210 08:52:34.142878  598084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-682065
	I1210 08:52:34.161780  598084 main.go:143] libmachine: Using SSH client type: native
	I1210 08:52:34.162099  598084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1210 08:52:34.162109  598084 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-682065 && echo "cert-expiration-682065" | sudo tee /etc/hostname
	I1210 08:52:34.308567  598084 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-682065
	
	I1210 08:52:34.308660  598084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-682065
	I1210 08:52:34.326028  598084 main.go:143] libmachine: Using SSH client type: native
	I1210 08:52:34.326333  598084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1210 08:52:34.326350  598084 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-682065' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-682065/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-682065' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 08:52:34.459403  598084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 08:52:34.459419  598084 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 08:52:34.459452  598084 ubuntu.go:190] setting up certificates
	I1210 08:52:34.459469  598084 provision.go:84] configureAuth start
	I1210 08:52:34.459534  598084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-682065
	I1210 08:52:34.477175  598084 provision.go:143] copyHostCerts
	I1210 08:52:34.477255  598084 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 08:52:34.477274  598084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 08:52:34.477354  598084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 08:52:34.477454  598084 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 08:52:34.477458  598084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 08:52:34.477482  598084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 08:52:34.477539  598084 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 08:52:34.477542  598084 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 08:52:34.477564  598084 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 08:52:34.477619  598084 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-682065 san=[127.0.0.1 192.168.76.2 cert-expiration-682065 localhost minikube]
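provision.go:117 signs a server certificate against the minikube CA with the SAN list shown (two IPs, three names). A compact crypto/x509 sketch of that signing; a throwaway self-signed CA stands in for ca.pem/ca-key.pem here, and the 3-minute NotAfter mirrors the profile's CertExpiration:3m0s, which is exactly what this cert-expiration test exercises:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Stand-in CA; the real step loads ca.pem / ca-key.pem instead.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server cert with the SANs from the log; NotAfter mirrors
	// the profile's CertExpiration:3m0s.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "cert-expiration-682065"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * time.Minute),
		DNSNames:     []string{"cert-expiration-682065", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	must(0, os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
}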
	I1210 08:52:35.024842  598084 provision.go:177] copyRemoteCerts
	I1210 08:52:35.024906  598084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 08:52:35.024946  598084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-682065
	I1210 08:52:35.043485  598084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/cert-expiration-682065/id_rsa Username:docker}
	I1210 08:52:35.140110  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 08:52:35.160868  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 08:52:35.181122  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 08:52:35.201500  598084 provision.go:87] duration metric: took 742.011135ms to configureAuth
	I1210 08:52:35.201518  598084 ubuntu.go:206] setting minikube options for container-runtime
	I1210 08:52:35.201710  598084 config.go:182] Loaded profile config "cert-expiration-682065": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 08:52:35.201822  598084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-682065
	I1210 08:52:35.223142  598084 main.go:143] libmachine: Using SSH client type: native
	I1210 08:52:35.223470  598084 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1210 08:52:35.223483  598084 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 08:52:35.510472  598084 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 08:52:35.510486  598084 machine.go:97] duration metric: took 4.529836376s to provisionDockerMachine
	I1210 08:52:35.510494  598084 client.go:176] duration metric: took 10.234597977s to LocalClient.Create
	I1210 08:52:35.510507  598084 start.go:167] duration metric: took 10.23465369s to libmachine.API.Create "cert-expiration-682065"
	I1210 08:52:35.510513  598084 start.go:293] postStartSetup for "cert-expiration-682065" (driver="docker")
	I1210 08:52:35.510523  598084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 08:52:35.510595  598084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 08:52:35.510999  598084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-682065
	I1210 08:52:35.536178  598084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/cert-expiration-682065/id_rsa Username:docker}
	I1210 08:52:35.631143  598084 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 08:52:35.634341  598084 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 08:52:35.634359  598084 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 08:52:35.634369  598084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 08:52:35.634423  598084 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 08:52:35.634509  598084 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 08:52:35.634604  598084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 08:52:35.642153  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 08:52:35.659702  598084 start.go:296] duration metric: took 149.176273ms for postStartSetup
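filesync mirrors everything under .minikube/files onto the node, which is how files/etc/ssl/certs/3785282.pem above becomes /etc/ssl/certs/3785282.pem. A sketch of the path mapping alone (the copy itself goes over SSH; the root path here is illustrative):

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	root := "/home/jenkins/.minikube/files" // illustrative root
	// Every file found below root maps to the same relative path
	// rooted at / on the node.
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, _ := filepath.Rel(root, p)
		fmt.Println("/" + rel) // e.g. /etc/ssl/certs/3785282.pem
		return nil
	})
}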
	I1210 08:52:35.660059  598084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-682065
	I1210 08:52:35.676464  598084 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/config.json ...
	I1210 08:52:35.676741  598084 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:52:35.676782  598084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-682065
	I1210 08:52:35.692778  598084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/cert-expiration-682065/id_rsa Username:docker}
	I1210 08:52:35.788077  598084 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 08:52:35.792588  598084 start.go:128] duration metric: took 10.520548776s to createHost
	I1210 08:52:35.792603  598084 start.go:83] releasing machines lock for "cert-expiration-682065", held for 10.520659128s
	I1210 08:52:35.792674  598084 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-682065
	I1210 08:52:35.809678  598084 ssh_runner.go:195] Run: cat /version.json
	I1210 08:52:35.809722  598084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-682065
	I1210 08:52:35.809958  598084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 08:52:35.810009  598084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-682065
	I1210 08:52:35.827208  598084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/cert-expiration-682065/id_rsa Username:docker}
	I1210 08:52:35.835158  598084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/cert-expiration-682065/id_rsa Username:docker}
	I1210 08:52:35.927263  598084 ssh_runner.go:195] Run: systemctl --version
	I1210 08:52:36.032986  598084 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 08:52:36.070802  598084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 08:52:36.075363  598084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 08:52:36.075436  598084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 08:52:36.104707  598084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1210 08:52:36.104743  598084 start.go:496] detecting cgroup driver to use...
	I1210 08:52:36.104776  598084 detect.go:187] detected "cgroupfs" cgroup driver on host os
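The detected "cgroupfs" line agrees with the CgroupDriver:cgroupfs field in the docker info dumps near the top of this trace. A one-call sketch of reading it directly:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Prints "cgroupfs" or "systemd" depending on the host daemon.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out)))
}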
	I1210 08:52:36.104826  598084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 08:52:36.123308  598084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 08:52:36.136137  598084 docker.go:218] disabling cri-docker service (if available) ...
	I1210 08:52:36.136201  598084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 08:52:36.154196  598084 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 08:52:36.172452  598084 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 08:52:36.293920  598084 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 08:52:36.423869  598084 docker.go:234] disabling docker service ...
	I1210 08:52:36.423942  598084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 08:52:36.444363  598084 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 08:52:36.457528  598084 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 08:52:36.581866  598084 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 08:52:36.711772  598084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 08:52:36.726878  598084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 08:52:36.742709  598084 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 08:52:36.742776  598084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:52:36.751764  598084 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 08:52:36.751824  598084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:52:36.760633  598084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:52:36.769892  598084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:52:36.778855  598084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 08:52:36.787578  598084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:52:36.796979  598084 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:52:36.810584  598084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
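Taken together, the sed edits above leave the CRI-O drop-in looking roughly like this. This is reconstructed from the commands, with section placement assumed from upstream CRI-O defaults, not captured from the node:

# /etc/crio/crio.conf.d/02-crio.conf (reconstructed)
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]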
	I1210 08:52:36.819594  598084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 08:52:36.827002  598084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 08:52:36.834363  598084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:52:36.951239  598084 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 08:52:37.149046  598084 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 08:52:37.149107  598084 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 08:52:37.152818  598084 start.go:564] Will wait 60s for crictl version
	I1210 08:52:37.152877  598084 ssh_runner.go:195] Run: which crictl
	I1210 08:52:37.156546  598084 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 08:52:37.180722  598084 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 08:52:37.180812  598084 ssh_runner.go:195] Run: crio --version
	I1210 08:52:37.209755  598084 ssh_runner.go:195] Run: crio --version
	I1210 08:52:37.245494  598084 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1210 08:52:37.248285  598084 cli_runner.go:164] Run: docker network inspect cert-expiration-682065 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 08:52:37.264327  598084 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1210 08:52:37.268106  598084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:52:37.278243  598084 kubeadm.go:884] updating cluster {Name:cert-expiration-682065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-682065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 08:52:37.278346  598084 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1210 08:52:37.278402  598084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:52:37.310020  598084 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 08:52:37.310033  598084 crio.go:433] Images already preloaded, skipping extraction
	I1210 08:52:37.310090  598084 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:52:37.336342  598084 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 08:52:37.336357  598084 cache_images.go:86] Images are preloaded, skipping loading
	I1210 08:52:37.336363  598084 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.2 crio true true} ...
	I1210 08:52:37.336447  598084 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-682065 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-682065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 08:52:37.336526  598084 ssh_runner.go:195] Run: crio config
	I1210 08:52:37.399815  598084 cni.go:84] Creating CNI manager for ""
	I1210 08:52:37.399826  598084 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 08:52:37.399840  598084 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 08:52:37.399861  598084 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-682065 NodeName:cert-expiration-682065 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 08:52:37.399977  598084 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-682065"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 08:52:37.400044  598084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1210 08:52:37.408000  598084 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 08:52:37.408066  598084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 08:52:37.417544  598084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1210 08:52:37.432694  598084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 08:52:37.448069  598084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1210 08:52:37.462200  598084 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1210 08:52:37.466034  598084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:52:37.478204  598084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:52:37.594586  598084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 08:52:37.609944  598084 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065 for IP: 192.168.76.2
	I1210 08:52:37.609955  598084 certs.go:195] generating shared ca certs ...
	I1210 08:52:37.609970  598084 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:52:37.610123  598084 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 08:52:37.610169  598084 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 08:52:37.610176  598084 certs.go:257] generating profile certs ...
	I1210 08:52:37.610230  598084 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/client.key
	I1210 08:52:37.610240  598084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/client.crt with IP's: []
	I1210 08:52:38.551778  598084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/client.crt ...
	I1210 08:52:38.551797  598084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/client.crt: {Name:mk3923dedee2882bab30fa1a859740fcbbb7f3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:52:38.552003  598084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/client.key ...
	I1210 08:52:38.552011  598084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/client.key: {Name:mka2107c948f4aa30abdcb452c40fc364308ba31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:52:38.552105  598084 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.key.da81fab3
	I1210 08:52:38.552118  598084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.crt.da81fab3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1210 08:52:38.685550  598084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.crt.da81fab3 ...
	I1210 08:52:38.685564  598084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.crt.da81fab3: {Name:mk18986557afb1eb63cca9d646f8760f33e7165e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:52:38.685774  598084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.key.da81fab3 ...
	I1210 08:52:38.685783  598084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.key.da81fab3: {Name:mk2db3e1bf1ae34d0fbf88b72124e115a604c60c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:52:38.685861  598084 certs.go:382] copying /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.crt.da81fab3 -> /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.crt
	I1210 08:52:38.685931  598084 certs.go:386] copying /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.key.da81fab3 -> /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.key
	I1210 08:52:38.685984  598084 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/proxy-client.key
	I1210 08:52:38.685996  598084 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/proxy-client.crt with IP's: []
	I1210 08:52:38.915952  598084 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/proxy-client.crt ...
	I1210 08:52:38.915971  598084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/proxy-client.crt: {Name:mk44fa8504cf31390185847ee2a8440b4be84e92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:52:38.916150  598084 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/proxy-client.key ...
	I1210 08:52:38.916157  598084 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/proxy-client.key: {Name:mk35858e47773ac5da95c319d9444b04931a03ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:52:38.916335  598084 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 08:52:38.916379  598084 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 08:52:38.916386  598084 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 08:52:38.916414  598084 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 08:52:38.916440  598084 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 08:52:38.916465  598084 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 08:52:38.916512  598084 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 08:52:38.917118  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 08:52:38.938528  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 08:52:38.959325  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 08:52:38.981348  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 08:52:38.999172  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 08:52:39.020283  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 08:52:39.038623  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 08:52:39.055830  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/cert-expiration-682065/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 08:52:39.072991  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 08:52:39.090071  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 08:52:39.107666  598084 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 08:52:39.127813  598084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 08:52:39.140184  598084 ssh_runner.go:195] Run: openssl version
	I1210 08:52:39.146328  598084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 08:52:39.154048  598084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 08:52:39.161338  598084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 08:52:39.165475  598084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 08:52:39.165531  598084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 08:52:39.207578  598084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 08:52:39.215029  598084 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3785282.pem /etc/ssl/certs/3ec20f2e.0
	I1210 08:52:39.222290  598084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:52:39.229796  598084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 08:52:39.236899  598084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:52:39.240581  598084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:52:39.240638  598084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:52:39.281369  598084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 08:52:39.288883  598084 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1210 08:52:39.296271  598084 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 08:52:39.303550  598084 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 08:52:39.310682  598084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 08:52:39.314342  598084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 08:52:39.314408  598084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 08:52:39.354974  598084 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1210 08:52:39.362461  598084 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/378528.pem /etc/ssl/certs/51391683.0
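The openssl x509 -hash / ln -fs pairs above exist because OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filenames (3ec20f2e.0, b5213941.0, 51391683.0 here). A sketch of one such pair, assuming the openssl binary is present and the symlink is permitted:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLink computes a certificate's subject hash and points the
// /etc/ssl/certs/<hash>.0 symlink at it, as in the log above.
func hashLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	fmt.Println(hashLink("/usr/share/ca-certificates/minikubeCA.pem"))
}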
	I1210 08:52:39.369795  598084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 08:52:39.373164  598084 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 08:52:39.373206  598084 kubeadm.go:401] StartCluster: {Name:cert-expiration-682065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-682065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:52:39.373269  598084 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 08:52:39.373333  598084 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 08:52:39.399350  598084 cri.go:89] found id: ""
	I1210 08:52:39.399411  598084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 08:52:39.407167  598084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 08:52:39.414924  598084 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1210 08:52:39.414981  598084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 08:52:39.423036  598084 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 08:52:39.423045  598084 kubeadm.go:158] found existing configuration files:
	
	I1210 08:52:39.423097  598084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 08:52:39.430764  598084 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 08:52:39.430835  598084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 08:52:39.438263  598084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 08:52:39.445919  598084 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 08:52:39.445998  598084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 08:52:39.453392  598084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 08:52:39.461144  598084 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 08:52:39.461215  598084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 08:52:39.468574  598084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 08:52:39.476018  598084 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 08:52:39.476083  598084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 08:52:39.483467  598084 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1210 08:52:39.526939  598084 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1210 08:52:39.527277  598084 kubeadm.go:319] [preflight] Running pre-flight checks
	I1210 08:52:39.551180  598084 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1210 08:52:39.551243  598084 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1210 08:52:39.551276  598084 kubeadm.go:319] OS: Linux
	I1210 08:52:39.551327  598084 kubeadm.go:319] CGROUPS_CPU: enabled
	I1210 08:52:39.551374  598084 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1210 08:52:39.551420  598084 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1210 08:52:39.551467  598084 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1210 08:52:39.551514  598084 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1210 08:52:39.551561  598084 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1210 08:52:39.551605  598084 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1210 08:52:39.551660  598084 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1210 08:52:39.551705  598084 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1210 08:52:39.628547  598084 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 08:52:39.628650  598084 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 08:52:39.628756  598084 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 08:52:39.637147  598084 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 08:52:39.643478  598084 out.go:252]   - Generating certificates and keys ...
	I1210 08:52:39.643601  598084 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1210 08:52:39.643678  598084 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1210 08:52:39.903957  598084 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 08:52:40.175648  598084 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1210 08:52:40.363576  598084 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1210 08:52:40.542419  598084 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1210 08:52:41.628522  598084 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1210 08:52:41.628670  598084 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-682065 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 08:52:42.211463  598084 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1210 08:52:42.211599  598084 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-682065 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1210 08:52:42.363743  598084 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 08:52:42.784000  598084 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 08:52:43.017053  598084 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1210 08:52:43.017294  598084 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 08:52:45.057216  598084 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 08:52:45.428166  598084 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 08:52:45.516909  598084 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 08:52:45.830180  598084 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 08:52:45.977527  598084 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 08:52:45.978249  598084 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 08:52:45.981549  598084 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 08:52:49.070241  559515 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000630101s
	I1210 08:52:49.070275  559515 kubeadm.go:319] 
	I1210 08:52:49.070333  559515 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1210 08:52:49.070367  559515 kubeadm.go:319] 	- The kubelet is not running
	I1210 08:52:49.070473  559515 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 08:52:49.070479  559515 kubeadm.go:319] 
	I1210 08:52:49.070584  559515 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 08:52:49.070616  559515 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1210 08:52:49.070647  559515 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1210 08:52:49.070652  559515 kubeadm.go:319] 
	I1210 08:52:49.074274  559515 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1210 08:52:49.074694  559515 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1210 08:52:49.074802  559515 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 08:52:49.075047  559515 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1210 08:52:49.075054  559515 kubeadm.go:319] 
	I1210 08:52:49.075122  559515 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1210 08:52:49.075172  559515 kubeadm.go:403] duration metric: took 12m6.229980771s to StartCluster
	I1210 08:52:49.075205  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 08:52:49.075263  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 08:52:49.133290  559515 cri.go:89] found id: ""
	I1210 08:52:49.133314  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.133323  559515 logs.go:284] No container was found matching "kube-apiserver"
	I1210 08:52:49.133330  559515 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 08:52:49.133395  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 08:52:49.179903  559515 cri.go:89] found id: ""
	I1210 08:52:49.179925  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.179933  559515 logs.go:284] No container was found matching "etcd"
	I1210 08:52:49.179940  559515 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 08:52:49.179997  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 08:52:49.229615  559515 cri.go:89] found id: ""
	I1210 08:52:49.229638  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.229647  559515 logs.go:284] No container was found matching "coredns"
	I1210 08:52:49.229658  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 08:52:49.229716  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 08:52:49.274404  559515 cri.go:89] found id: ""
	I1210 08:52:49.274427  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.274435  559515 logs.go:284] No container was found matching "kube-scheduler"
	I1210 08:52:49.274441  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 08:52:49.274501  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 08:52:49.329110  559515 cri.go:89] found id: ""
	I1210 08:52:49.329189  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.329212  559515 logs.go:284] No container was found matching "kube-proxy"
	I1210 08:52:49.329229  559515 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 08:52:49.329343  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 08:52:49.380464  559515 cri.go:89] found id: ""
	I1210 08:52:49.380538  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.380579  559515 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 08:52:49.380601  559515 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 08:52:49.380689  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 08:52:49.456104  559515 cri.go:89] found id: ""
	I1210 08:52:49.456178  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.456201  559515 logs.go:284] No container was found matching "kindnet"
	I1210 08:52:49.456222  559515 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 08:52:49.456332  559515 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 08:52:49.532672  559515 cri.go:89] found id: ""
	I1210 08:52:49.532761  559515 logs.go:282] 0 containers: []
	W1210 08:52:49.532785  559515 logs.go:284] No container was found matching "storage-provisioner"
	I1210 08:52:49.532826  559515 logs.go:123] Gathering logs for kubelet ...
	I1210 08:52:49.532855  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 08:52:49.635747  559515 logs.go:123] Gathering logs for dmesg ...
	I1210 08:52:49.635823  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 08:52:49.655415  559515 logs.go:123] Gathering logs for describe nodes ...
	I1210 08:52:49.655497  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 08:52:49.775827  559515 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 08:52:49.775899  559515 logs.go:123] Gathering logs for CRI-O ...
	I1210 08:52:49.775925  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 08:52:49.835125  559515 logs.go:123] Gathering logs for container status ...
	I1210 08:52:49.835200  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 08:52:49.895453  559515 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000630101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1210 08:52:49.895511  559515 out.go:285] * 
	W1210 08:52:49.895562  559515 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000630101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:52:49.895571  559515 out.go:285] * 
	W1210 08:52:49.897703  559515 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 08:52:49.903271  559515 out.go:203] 
	W1210 08:52:49.907070  559515 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000630101s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 08:52:49.907125  559515 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 08:52:49.907145  559515 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 08:52:49.910310  559515 out.go:203] 
	I1210 08:52:45.984924  598084 out.go:252]   - Booting up control plane ...
	I1210 08:52:45.985021  598084 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 08:52:45.985097  598084 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 08:52:45.985163  598084 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 08:52:46.001838  598084 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 08:52:46.001942  598084 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1210 08:52:46.011541  598084 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1210 08:52:46.011655  598084 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 08:52:46.011701  598084 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1210 08:52:46.148873  598084 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 08:52:46.148985  598084 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 08:52:47.650569  598084 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501850137s
	I1210 08:52:47.654199  598084 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1210 08:52:47.654287  598084 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1210 08:52:47.654549  598084 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1210 08:52:47.654630  598084 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.376116372Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.376152229Z" level=info msg="Starting seccomp notifier watcher"
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.376192869Z" level=info msg="Create NRI interface"
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.376291356Z" level=info msg="built-in NRI default validator is disabled"
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.376300226Z" level=info msg="runtime interface created"
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.376311213Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.376317514Z" level=info msg="runtime interface starting up..."
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.376324087Z" level=info msg="starting plugins..."
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.37633783Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 10 08:40:37 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:40:37.376400526Z" level=info msg="No systemd watchdog enabled"
	Dec 10 08:40:37 kubernetes-upgrade-470056 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 10 08:44:46 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:44:46.115856055Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=6072f2a7-110d-4a62-a51b-6258be108cba name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:44:46 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:44:46.11675596Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b9a325ff-9b08-4ec5-9b71-819252f1b482 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:44:46 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:44:46.117306774Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=4858009e-4ba5-4255-8364-06dfde3d35b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:44:46 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:44:46.117843558Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1dbb3a51-0a81-4158-bcea-2ab00e0b1863 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:44:46 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:44:46.11834035Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a94c6376-d6ae-4437-b928-3f7ef26ff96a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:44:46 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:44:46.118917905Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=147e1ea0-0548-4585-8780-10447c555c51 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:44:46 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:44:46.119597074Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=da895956-6f94-4c69-bb41-4c4493833de4 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:48:48 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:48:48.001057988Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b33b4d18-7b7f-4ffd-a329-2dd108efea06 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:48:48 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:48:48.002691791Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=ddb4e647-ae64-47f9-8e25-7995e567af73 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:48:48 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:48:48.003589703Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=bf0e6b9d-f963-4bf1-ba74-fd1403569550 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:48:48 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:48:48.004358456Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f649f7f8-de7b-4c4f-a0c6-6a8fa907cf5d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:48:48 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:48:48.005532686Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=ab31594e-880e-4076-b6b9-6860e5f41fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:48:48 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:48:48.007506718Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ade42499-e047-4d45-9a76-b0324b9d8b2e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 08:48:48 kubernetes-upgrade-470056 crio[615]: time="2025-12-10T08:48:48.008357777Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=90948e2e-362c-4137-b7d5-25193cd7245e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 08:15] overlayfs: idmapped layers are currently not supported
	[  +3.847763] overlayfs: idmapped layers are currently not supported
	[Dec10 08:17] overlayfs: idmapped layers are currently not supported
	[Dec10 08:18] overlayfs: idmapped layers are currently not supported
	[Dec10 08:20] overlayfs: idmapped layers are currently not supported
	[Dec10 08:24] overlayfs: idmapped layers are currently not supported
	[Dec10 08:25] overlayfs: idmapped layers are currently not supported
	[Dec10 08:26] overlayfs: idmapped layers are currently not supported
	[Dec10 08:27] overlayfs: idmapped layers are currently not supported
	[Dec10 08:28] overlayfs: idmapped layers are currently not supported
	[Dec10 08:30] overlayfs: idmapped layers are currently not supported
	[ +17.507086] overlayfs: idmapped layers are currently not supported
	[Dec10 08:31] overlayfs: idmapped layers are currently not supported
	[ +48.274286] overlayfs: idmapped layers are currently not supported
	[Dec10 08:32] overlayfs: idmapped layers are currently not supported
	[ +48.206918] overlayfs: idmapped layers are currently not supported
	[Dec10 08:33] overlayfs: idmapped layers are currently not supported
	[Dec10 08:34] overlayfs: idmapped layers are currently not supported
	[Dec10 08:35] overlayfs: idmapped layers are currently not supported
	[Dec10 08:37] overlayfs: idmapped layers are currently not supported
	[  +1.400670] overlayfs: idmapped layers are currently not supported
	[Dec10 08:40] overlayfs: idmapped layers are currently not supported
	[Dec10 08:51] overlayfs: idmapped layers are currently not supported
	[Dec10 08:52] overlayfs: idmapped layers are currently not supported
	[ +34.378489] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 08:52:52 up  3:35,  0 user,  load average: 2.41, 1.56, 1.58
	Linux kubernetes-upgrade-470056 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 10 08:52:49 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:52:50 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 960.
	Dec 10 08:52:50 kubernetes-upgrade-470056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:52:50 kubernetes-upgrade-470056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:52:50 kubernetes-upgrade-470056 kubelet[12221]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:52:50 kubernetes-upgrade-470056 kubelet[12221]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:52:50 kubernetes-upgrade-470056 kubelet[12221]: E1210 08:52:50.314289   12221 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:52:50 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:52:50 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:52:51 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 10 08:52:51 kubernetes-upgrade-470056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:52:51 kubernetes-upgrade-470056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:52:51 kubernetes-upgrade-470056 kubelet[12226]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:52:51 kubernetes-upgrade-470056 kubelet[12226]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:52:51 kubernetes-upgrade-470056 kubelet[12226]: E1210 08:52:51.305402   12226 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:52:51 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:52:51 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 08:52:52 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 10 08:52:52 kubernetes-upgrade-470056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:52:52 kubernetes-upgrade-470056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 10 08:52:52 kubernetes-upgrade-470056 kubelet[12267]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:52:52 kubernetes-upgrade-470056 kubelet[12267]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 10 08:52:52 kubernetes-upgrade-470056 kubelet[12267]: E1210 08:52:52.230603   12267 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 10 08:52:52 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 08:52:52 kubernetes-upgrade-470056 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
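
Note on the failure mode: the kubelet journal above shows why the control plane never came up on this run. kubelet v1.35.0-beta.0 exits at startup on this cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a loop (restart counter 960-962) and kubeadm's 4m0s health check against http://127.0.0.1:10248/healthz can never pass. The kubeadm preflight warning names the opt-out: set the KubeletConfiguration option 'FailCgroupV1' to 'false'. A minimal sketch of that override via the kubeadm patches mechanism already visible in this log ([patches] ... target "kubeletconfiguration"); the patch file path is an assumption, not taken from the log:

	# hypothetical patch file path; failCgroupV1 is the KubeletConfiguration
	# field the preflight warning refers to as 'FailCgroupV1'
	sudo tee /etc/kubernetes/patches/kubeletconfiguration+strategic.yaml <<-'EOF'
		apiVersion: kubelet.config.k8s.io/v1beta1
		kind: KubeletConfiguration
		failCgroupV1: false
	EOF

Per the warning text, the related validation must also be skipped explicitly; this run's kubeadm invocation already lists SystemVerification under --ignore-preflight-errors.
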
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-470056 -n kubernetes-upgrade-470056
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-470056 -n kubernetes-upgrade-470056: exit status 2 (322.935682ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-470056" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-470056" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-470056
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-470056: (2.557886043s)
--- FAIL: TestKubernetesUpgrade (780.92s)
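
The failing start above also carries minikube's own suggestion ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start", related issue 4172). A sketch of that re-run against the same profile, with the version, runtime, and driver values taken from this log; note the suggestion targets a cgroup-driver mismatch, which is not the same condition as the cgroup v1 rejection recorded in the kubelet journal:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-470056 \
	  --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.35.0-beta.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
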

TestPause/serial/Pause (6.87s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-767596 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-767596 --alsologtostderr -v=5: exit status 80 (1.98766695s)

-- stdout --
	* Pausing node pause-767596 ... 
	
	

-- /stdout --
** stderr ** 
	I1210 08:40:44.444434  560935 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:40:44.445463  560935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:40:44.445518  560935 out.go:374] Setting ErrFile to fd 2...
	I1210 08:40:44.445540  560935 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:40:44.446036  560935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:40:44.446683  560935 out.go:368] Setting JSON to false
	I1210 08:40:44.446801  560935 mustload.go:66] Loading cluster: pause-767596
	I1210 08:40:44.447407  560935 config.go:182] Loaded profile config "pause-767596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 08:40:44.448029  560935 cli_runner.go:164] Run: docker container inspect pause-767596 --format={{.State.Status}}
	I1210 08:40:44.473103  560935 host.go:66] Checking if "pause-767596" exists ...
	I1210 08:40:44.473421  560935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:40:44.568799  560935 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-10 08:40:44.556674908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:40:44.569559  560935 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765151505-21409/minikube-v1.37.0-1765151505-21409-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765151505-21409-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-767596 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1210 08:40:44.572885  560935 out.go:179] * Pausing node pause-767596 ... 
	I1210 08:40:44.576594  560935 host.go:66] Checking if "pause-767596" exists ...
	I1210 08:40:44.576992  560935 ssh_runner.go:195] Run: systemctl --version
	I1210 08:40:44.577056  560935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-767596
	I1210 08:40:44.603353  560935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/pause-767596/id_rsa Username:docker}
	I1210 08:40:44.706396  560935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:40:44.720752  560935 pause.go:52] kubelet running: true
	I1210 08:40:44.720868  560935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 08:40:44.990880  560935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 08:40:44.991006  560935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 08:40:45.116708  560935 cri.go:89] found id: "0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca"
	I1210 08:40:45.116788  560935 cri.go:89] found id: "23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f"
	I1210 08:40:45.116808  560935 cri.go:89] found id: "195fe55f5ac4a49a5f8ab6ad8e4d47c8eda98becee1742359aeff6075452e689"
	I1210 08:40:45.116826  560935 cri.go:89] found id: "d6c7b4fc3031146f4408f3b9fc1855ea7b8c17881d51899d60b47dc3ac1095ad"
	I1210 08:40:45.116846  560935 cri.go:89] found id: "d7674cb30a04349c5eac365914ffd12f57c54b30f34067fb8aaca29e1c1f8db6"
	I1210 08:40:45.116880  560935 cri.go:89] found id: "3041c7738207a5c5b2d647d810aafb92c70d64ab842ac4e8145635706f896e61"
	I1210 08:40:45.116904  560935 cri.go:89] found id: "4d1d5992cc4a7b009e9355692324b93c4073f0c07db670f0d6fb5415eb94e688"
	I1210 08:40:45.116927  560935 cri.go:89] found id: "a6d2f0e2f3d9d16f6e4e55ca9f54961f4e0d50dcc4d20da2b0505e4622433a37"
	I1210 08:40:45.116945  560935 cri.go:89] found id: "794b94a9f483ad2f1228b1f4bcff462c3ed700eb0451dc2e54464daa7fb66455"
	I1210 08:40:45.116968  560935 cri.go:89] found id: "2e2fa03d36412a2c91c2f76d767f6e2e9e591aa5b021bfb00f6f513ec0bfd307"
	I1210 08:40:45.117000  560935 cri.go:89] found id: "022741f8c7a581c5c492c4ae38c1775fb76122b7b80fa353571b73866aeabdcb"
	I1210 08:40:45.117017  560935 cri.go:89] found id: "b9c23cab83e3933a35721d41b4a1c9d16c345dc93c9e4f506985082ad0b659b0"
	I1210 08:40:45.117037  560935 cri.go:89] found id: "c1c1db525792cee93534a1a2d8bc9e289bb8e44a3c18c1a2962887ee83192854"
	I1210 08:40:45.117055  560935 cri.go:89] found id: "d799933dbf1f73062115abbf6e2ba939f1de6386c5fd7cc217bae3a3989ffcf5"
	I1210 08:40:45.117083  560935 cri.go:89] found id: "81209469877d513d99fb101d1a10c2ce3527b803a1a2632345426450f209d461"
	I1210 08:40:45.117110  560935 cri.go:89] found id: "f04f090d9d5b769dca360fb021e4a3baab548c08d62a34df27003779c331b1d6"
	I1210 08:40:45.117131  560935 cri.go:89] found id: ""
	I1210 08:40:45.117241  560935 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 08:40:45.131861  560935 retry.go:31] will retry after 235.36156ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T08:40:45Z" level=error msg="open /run/runc: no such file or directory"
	I1210 08:40:45.368239  560935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:40:45.380956  560935 pause.go:52] kubelet running: false
	I1210 08:40:45.381022  560935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 08:40:45.516223  560935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 08:40:45.516384  560935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 08:40:45.579471  560935 cri.go:89] found id: "0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca"
	I1210 08:40:45.579492  560935 cri.go:89] found id: "23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f"
	I1210 08:40:45.579497  560935 cri.go:89] found id: "195fe55f5ac4a49a5f8ab6ad8e4d47c8eda98becee1742359aeff6075452e689"
	I1210 08:40:45.579501  560935 cri.go:89] found id: "d6c7b4fc3031146f4408f3b9fc1855ea7b8c17881d51899d60b47dc3ac1095ad"
	I1210 08:40:45.579504  560935 cri.go:89] found id: "d7674cb30a04349c5eac365914ffd12f57c54b30f34067fb8aaca29e1c1f8db6"
	I1210 08:40:45.579507  560935 cri.go:89] found id: "3041c7738207a5c5b2d647d810aafb92c70d64ab842ac4e8145635706f896e61"
	I1210 08:40:45.579511  560935 cri.go:89] found id: "4d1d5992cc4a7b009e9355692324b93c4073f0c07db670f0d6fb5415eb94e688"
	I1210 08:40:45.579514  560935 cri.go:89] found id: "a6d2f0e2f3d9d16f6e4e55ca9f54961f4e0d50dcc4d20da2b0505e4622433a37"
	I1210 08:40:45.579517  560935 cri.go:89] found id: "794b94a9f483ad2f1228b1f4bcff462c3ed700eb0451dc2e54464daa7fb66455"
	I1210 08:40:45.579533  560935 cri.go:89] found id: "2e2fa03d36412a2c91c2f76d767f6e2e9e591aa5b021bfb00f6f513ec0bfd307"
	I1210 08:40:45.579536  560935 cri.go:89] found id: "022741f8c7a581c5c492c4ae38c1775fb76122b7b80fa353571b73866aeabdcb"
	I1210 08:40:45.579540  560935 cri.go:89] found id: "b9c23cab83e3933a35721d41b4a1c9d16c345dc93c9e4f506985082ad0b659b0"
	I1210 08:40:45.579543  560935 cri.go:89] found id: "c1c1db525792cee93534a1a2d8bc9e289bb8e44a3c18c1a2962887ee83192854"
	I1210 08:40:45.579546  560935 cri.go:89] found id: "d799933dbf1f73062115abbf6e2ba939f1de6386c5fd7cc217bae3a3989ffcf5"
	I1210 08:40:45.579549  560935 cri.go:89] found id: "81209469877d513d99fb101d1a10c2ce3527b803a1a2632345426450f209d461"
	I1210 08:40:45.579554  560935 cri.go:89] found id: "f04f090d9d5b769dca360fb021e4a3baab548c08d62a34df27003779c331b1d6"
	I1210 08:40:45.579557  560935 cri.go:89] found id: ""
	I1210 08:40:45.579611  560935 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 08:40:45.590449  560935 retry.go:31] will retry after 473.509381ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T08:40:45Z" level=error msg="open /run/runc: no such file or directory"
	I1210 08:40:46.064145  560935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:40:46.077739  560935 pause.go:52] kubelet running: false
	I1210 08:40:46.077805  560935 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1210 08:40:46.245028  560935 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1210 08:40:46.245112  560935 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1210 08:40:46.319579  560935 cri.go:89] found id: "0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca"
	I1210 08:40:46.319601  560935 cri.go:89] found id: "23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f"
	I1210 08:40:46.319616  560935 cri.go:89] found id: "195fe55f5ac4a49a5f8ab6ad8e4d47c8eda98becee1742359aeff6075452e689"
	I1210 08:40:46.319620  560935 cri.go:89] found id: "d6c7b4fc3031146f4408f3b9fc1855ea7b8c17881d51899d60b47dc3ac1095ad"
	I1210 08:40:46.319624  560935 cri.go:89] found id: "d7674cb30a04349c5eac365914ffd12f57c54b30f34067fb8aaca29e1c1f8db6"
	I1210 08:40:46.319628  560935 cri.go:89] found id: "3041c7738207a5c5b2d647d810aafb92c70d64ab842ac4e8145635706f896e61"
	I1210 08:40:46.319631  560935 cri.go:89] found id: "4d1d5992cc4a7b009e9355692324b93c4073f0c07db670f0d6fb5415eb94e688"
	I1210 08:40:46.319634  560935 cri.go:89] found id: "a6d2f0e2f3d9d16f6e4e55ca9f54961f4e0d50dcc4d20da2b0505e4622433a37"
	I1210 08:40:46.319637  560935 cri.go:89] found id: "794b94a9f483ad2f1228b1f4bcff462c3ed700eb0451dc2e54464daa7fb66455"
	I1210 08:40:46.319644  560935 cri.go:89] found id: "2e2fa03d36412a2c91c2f76d767f6e2e9e591aa5b021bfb00f6f513ec0bfd307"
	I1210 08:40:46.319648  560935 cri.go:89] found id: "022741f8c7a581c5c492c4ae38c1775fb76122b7b80fa353571b73866aeabdcb"
	I1210 08:40:46.319651  560935 cri.go:89] found id: "b9c23cab83e3933a35721d41b4a1c9d16c345dc93c9e4f506985082ad0b659b0"
	I1210 08:40:46.319654  560935 cri.go:89] found id: "c1c1db525792cee93534a1a2d8bc9e289bb8e44a3c18c1a2962887ee83192854"
	I1210 08:40:46.319656  560935 cri.go:89] found id: "d799933dbf1f73062115abbf6e2ba939f1de6386c5fd7cc217bae3a3989ffcf5"
	I1210 08:40:46.319660  560935 cri.go:89] found id: "81209469877d513d99fb101d1a10c2ce3527b803a1a2632345426450f209d461"
	I1210 08:40:46.319664  560935 cri.go:89] found id: "f04f090d9d5b769dca360fb021e4a3baab548c08d62a34df27003779c331b1d6"
	I1210 08:40:46.319673  560935 cri.go:89] found id: ""
	I1210 08:40:46.319735  560935 ssh_runner.go:195] Run: sudo runc list -f json
	I1210 08:40:46.333978  560935 out.go:203] 
	W1210 08:40:46.336899  560935 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T08:40:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1210 08:40:46.336923  560935 out.go:285] * 
	W1210 08:40:46.343210  560935 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 08:40:46.346070  560935 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-767596 --alsologtostderr -v=5" : exit status 80
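Note on the failure above: the pause path disables the kubelet, enumerates CRI containers with crictl, and then shells out to "sudo runc list -f json". Every attempt, including the retries logged at retry.go:31, fails with "open /run/runc: no such file or directory", so the command finally exits with GUEST_PAUSE (exit status 80). The growing per-attempt delays (235ms, then 473ms) are consistent with an exponential-style backoff. Below is a minimal Go sketch of that retry-then-fail shape, assuming illustrative helper names and backoff constants rather than minikube's actual retry implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunningContainers mirrors the "sudo runc list -f json" step from the
// log; on this node it always fails because /run/runc does not exist.
func listRunningContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	delay := 200 * time.Millisecond // hypothetical initial backoff
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRunningContainers()
		if err == nil {
			fmt.Printf("containers: %s\n", out)
			return
		}
		fmt.Printf("attempt %d failed: %v; will retry after %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // grow the delay, as the increasing retry intervals in the log suggest
	}
	fmt.Println("giving up") // minikube surfaces this as GUEST_PAUSE / exit status 80
}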
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-767596
helpers_test.go:244: (dbg) docker inspect pause-767596:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32",
	        "Created": "2025-12-10T08:37:19.219959796Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 542571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T08:37:19.306831014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32/hostname",
	        "HostsPath": "/var/lib/docker/containers/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32/hosts",
	        "LogPath": "/var/lib/docker/containers/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32-json.log",
	        "Name": "/pause-767596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-767596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-767596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32",
	                "LowerDir": "/var/lib/docker/overlay2/883271cb72aed4d4c78b44c720149d5c8f8b74d226e433cbc5b02384bf5bcedd-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/883271cb72aed4d4c78b44c720149d5c8f8b74d226e433cbc5b02384bf5bcedd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/883271cb72aed4d4c78b44c720149d5c8f8b74d226e433cbc5b02384bf5bcedd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/883271cb72aed4d4c78b44c720149d5c8f8b74d226e433cbc5b02384bf5bcedd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-767596",
	                "Source": "/var/lib/docker/volumes/pause-767596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-767596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-767596",
	                "name.minikube.sigs.k8s.io": "pause-767596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97766222e69a3d0e70fa10b2de4407da832a1725dbe20f22a0e98ae150791032",
	            "SandboxKey": "/var/run/docker/netns/97766222e69a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33362"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33361"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-767596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:da:52:90:93:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e31929f798a32ed6a503ac5696c09fd3543da3dc8796223a5f988246910e7569",
	                    "EndpointID": "b33a2843877ebe4d47b18461d7dee1f84c50fafa2d163db899373dac98e1d9c0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-767596",
	                        "8db1a7a9ae81"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
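The inspect dump above is mostly useful for the port map: NetworkSettings.Ports shows the guest's 22/tcp bound to 127.0.0.1:33358, matching the SSH client the pause log opened (sshutil.go:53). A Go sketch of recovering that mapping programmatically, assuming the profile name from this report; decoding the full JSON is just one way to do what the "docker container inspect -f" template in the log does:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspectEntry models only the fields this lookup needs; "docker inspect"
// emits a JSON array with one element per container.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-767596").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspectEntry
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	// For this report the answer is 127.0.0.1:33358 (see "22/tcp" above).
	for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
	}
}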
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-767596 -n pause-767596
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-767596 -n pause-767596: exit status 2 (348.845476ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
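"minikube status" signals component state through its exit code, which is why the harness treats exit status 2 alongside a "Running" host as potentially fine ("may be ok"). A sketch of invoking it and reading the code without treating any non-zero exit as fatal; the exact meaning of each code is minikube-internal and not assumed here:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the post-mortem above, against the same profile.
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "pause-767596")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit 2 here even though the host still prints "Running".
		fmt.Printf("status exit code: %d\n", exitErr.ExitCode())
	}
}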
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-767596 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-767596 logs -n 25: (1.576858044s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-500883 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-500883       │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │                     │
	│ stop    │ -p scheduled-stop-500883 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-500883       │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │                     │
	│ stop    │ -p scheduled-stop-500883 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-500883       │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │ 10 Dec 25 08:36 UTC │
	│ delete  │ -p scheduled-stop-500883                                                                                                                        │ scheduled-stop-500883       │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │ 10 Dec 25 08:36 UTC │
	│ start   │ -p insufficient-storage-055567 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                │ insufficient-storage-055567 │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │                     │
	│ delete  │ -p insufficient-storage-055567                                                                                                                  │ insufficient-storage-055567 │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:37 UTC │
	│ start   │ -p NoKubernetes-783391 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                   │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │                     │
	│ start   │ -p pause-767596 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-767596                │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:38 UTC │
	│ start   │ -p NoKubernetes-783391 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:37 UTC │
	│ start   │ -p NoKubernetes-783391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:37 UTC │
	│ delete  │ -p NoKubernetes-783391                                                                                                                          │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:37 UTC │
	│ start   │ -p NoKubernetes-783391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:38 UTC │
	│ ssh     │ -p NoKubernetes-783391 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │                     │
	│ stop    │ -p NoKubernetes-783391                                                                                                                          │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:38 UTC │
	│ start   │ -p NoKubernetes-783391 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:38 UTC │
	│ ssh     │ -p NoKubernetes-783391 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │                     │
	│ delete  │ -p NoKubernetes-783391                                                                                                                          │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:38 UTC │
	│ start   │ -p missing-upgrade-317974 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-317974      │ jenkins │ v1.35.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:39 UTC │
	│ start   │ -p pause-767596 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-767596                │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:40 UTC │
	│ start   │ -p missing-upgrade-317974 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-317974      │ jenkins │ v1.37.0 │ 10 Dec 25 08:39 UTC │ 10 Dec 25 08:39 UTC │
	│ delete  │ -p missing-upgrade-317974                                                                                                                       │ missing-upgrade-317974      │ jenkins │ v1.37.0 │ 10 Dec 25 08:39 UTC │ 10 Dec 25 08:39 UTC │
	│ start   │ -p kubernetes-upgrade-470056 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-470056   │ jenkins │ v1.37.0 │ 10 Dec 25 08:39 UTC │ 10 Dec 25 08:40 UTC │
	│ stop    │ -p kubernetes-upgrade-470056                                                                                                                    │ kubernetes-upgrade-470056   │ jenkins │ v1.37.0 │ 10 Dec 25 08:40 UTC │ 10 Dec 25 08:40 UTC │
	│ start   │ -p kubernetes-upgrade-470056 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-470056   │ jenkins │ v1.37.0 │ 10 Dec 25 08:40 UTC │                     │
	│ pause   │ -p pause-767596 --alsologtostderr -v=5                                                                                                          │ pause-767596                │ jenkins │ v1.37.0 │ 10 Dec 25 08:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 08:40:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 08:40:30.040162  559515 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:40:30.040316  559515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:40:30.040328  559515 out.go:374] Setting ErrFile to fd 2...
	I1210 08:40:30.040334  559515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:40:30.040721  559515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:40:30.041220  559515 out.go:368] Setting JSON to false
	I1210 08:40:30.042306  559515 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12180,"bootTime":1765343850,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 08:40:30.042423  559515 start.go:143] virtualization:  
	I1210 08:40:30.051546  559515 out.go:179] * [kubernetes-upgrade-470056] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 08:40:30.054446  559515 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:40:30.054564  559515 notify.go:221] Checking for updates...
	I1210 08:40:30.060171  559515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:40:30.062908  559515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:40:30.065796  559515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 08:40:30.068574  559515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:40:30.071374  559515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:40:30.074806  559515 config.go:182] Loaded profile config "kubernetes-upgrade-470056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 08:40:30.075505  559515 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:40:30.145265  559515 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:40:30.145438  559515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:40:30.244737  559515 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:40:30.2317479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:40:30.244848  559515 docker.go:319] overlay module found
	I1210 08:40:30.247847  559515 out.go:179] * Using the docker driver based on existing profile
	I1210 08:40:30.250588  559515 start.go:309] selected driver: docker
	I1210 08:40:30.250608  559515 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:40:30.250732  559515 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:40:30.251479  559515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:40:30.368120  559515 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:40:30.354165413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:40:30.368455  559515 cni.go:84] Creating CNI manager for ""
	I1210 08:40:30.368523  559515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 08:40:30.368566  559515 start.go:353] cluster config:
	{Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:40:30.371722  559515 out.go:179] * Starting "kubernetes-upgrade-470056" primary control-plane node in "kubernetes-upgrade-470056" cluster
	I1210 08:40:30.374452  559515 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 08:40:30.377369  559515 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 08:40:30.380261  559515 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 08:40:30.380315  559515 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 08:40:30.380330  559515 cache.go:65] Caching tarball of preloaded images
	I1210 08:40:30.380421  559515 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 08:40:30.380436  559515 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 08:40:30.380553  559515 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/config.json ...
	I1210 08:40:30.380752  559515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 08:40:30.410489  559515 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 08:40:30.410515  559515 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 08:40:30.410530  559515 cache.go:243] Successfully downloaded all kic artifacts
	I1210 08:40:30.410565  559515 start.go:360] acquireMachinesLock for kubernetes-upgrade-470056: {Name:mk76103b2f0fae4fa69e0d1baba03cd5feffd6fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 08:40:30.410624  559515 start.go:364] duration metric: took 35.045µs to acquireMachinesLock for "kubernetes-upgrade-470056"
	I1210 08:40:30.410645  559515 start.go:96] Skipping create...Using existing machine configuration
	I1210 08:40:30.410655  559515 fix.go:54] fixHost starting: 
	I1210 08:40:30.410917  559515 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470056 --format={{.State.Status}}
	I1210 08:40:30.440722  559515 fix.go:112] recreateIfNeeded on kubernetes-upgrade-470056: state=Stopped err=<nil>
	W1210 08:40:30.440757  559515 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 08:40:30.443984  559515 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-470056" ...
	I1210 08:40:30.444088  559515 cli_runner.go:164] Run: docker start kubernetes-upgrade-470056
	I1210 08:40:30.811799  559515 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470056 --format={{.State.Status}}
	I1210 08:40:30.848700  559515 kic.go:430] container "kubernetes-upgrade-470056" state is running.
	I1210 08:40:30.849107  559515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470056
	I1210 08:40:30.887232  559515 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/config.json ...
	I1210 08:40:30.887464  559515 machine.go:94] provisionDockerMachine start ...
	I1210 08:40:30.887537  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:30.917139  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:30.917487  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:30.917497  559515 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 08:40:30.918619  559515 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 08:40:34.090535  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470056
	
	I1210 08:40:34.090615  559515 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-470056"
	I1210 08:40:34.090716  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.128340  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:34.128657  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:34.128673  559515 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-470056 && echo "kubernetes-upgrade-470056" | sudo tee /etc/hostname
	I1210 08:40:34.297852  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470056
	
	I1210 08:40:34.298008  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.376678  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:34.376987  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:34.377003  559515 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-470056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-470056/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-470056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 08:40:34.547600  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1210 08:40:34.547679  559515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 08:40:34.547721  559515 ubuntu.go:190] setting up certificates
	I1210 08:40:34.547762  559515 provision.go:84] configureAuth start
	I1210 08:40:34.547888  559515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470056
	I1210 08:40:34.573984  559515 provision.go:143] copyHostCerts
	I1210 08:40:34.574057  559515 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 08:40:34.574066  559515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 08:40:34.574142  559515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 08:40:34.574236  559515 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 08:40:34.574241  559515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 08:40:34.574266  559515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 08:40:34.574315  559515 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 08:40:34.574320  559515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 08:40:34.574343  559515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 08:40:34.574398  559515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-470056 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-470056 localhost minikube]
	I1210 08:40:34.686160  559515 provision.go:177] copyRemoteCerts
	I1210 08:40:34.686273  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 08:40:34.686347  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.722343  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:34.832542  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 08:40:34.861570  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 08:40:34.896052  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 08:40:34.921367  559515 provision.go:87] duration metric: took 373.563126ms to configureAuth
	I1210 08:40:34.921450  559515 ubuntu.go:206] setting minikube options for container-runtime
	I1210 08:40:34.921682  559515 config.go:182] Loaded profile config "kubernetes-upgrade-470056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 08:40:34.921855  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.946408  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:34.946719  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:34.946736  559515 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 08:40:34.617615  550743 node_ready.go:49] node "pause-767596" is "Ready"
	I1210 08:40:34.617651  550743 node_ready.go:38] duration metric: took 10.892704746s for node "pause-767596" to be "Ready" ...
	I1210 08:40:34.617666  550743 api_server.go:52] waiting for apiserver process to appear ...
	I1210 08:40:34.617738  550743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:34.646308  550743 api_server.go:72] duration metric: took 11.238014636s to wait for apiserver process to appear ...
	I1210 08:40:34.646335  550743 api_server.go:88] waiting for apiserver healthz status ...
	I1210 08:40:34.646354  550743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:40:34.738851  550743 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 08:40:34.738885  550743 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 08:40:35.147100  550743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:40:35.165835  550743 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 08:40:35.165928  550743 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 08:40:35.646486  550743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:40:35.654761  550743 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 08:40:35.654839  550743 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
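(Note on the repeated 500s above: the apiserver deliberately prints "reason withheld" for failed checks on the aggregate /healthz endpoint; the underlying error is only exposed on the per-check path, which requires explicit permission. A sketch of pulling the real reason with the cluster-admin credentials minikube configured for this profile:

    # query the individual failing check, e.g. the RBAC bootstrap hook
    kubectl --context pause-767596 get --raw '/healthz/poststarthook/rbac/bootstrap-roles'

In this run the failures clear on their own within a second or two, which is the normal post-restart bootstrap window.)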
	I1210 08:40:35.353912  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 08:40:35.353936  559515 machine.go:97] duration metric: took 4.466461328s to provisionDockerMachine
	I1210 08:40:35.353947  559515 start.go:293] postStartSetup for "kubernetes-upgrade-470056" (driver="docker")
	I1210 08:40:35.353983  559515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 08:40:35.354100  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 08:40:35.354167  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.374974  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.493386  559515 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 08:40:35.497233  559515 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 08:40:35.497304  559515 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 08:40:35.497336  559515 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 08:40:35.497408  559515 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 08:40:35.497512  559515 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 08:40:35.497643  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 08:40:35.510121  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 08:40:35.541645  559515 start.go:296] duration metric: took 187.683107ms for postStartSetup
	I1210 08:40:35.541805  559515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:40:35.541878  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.569359  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.679440  559515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 08:40:35.687359  559515 fix.go:56] duration metric: took 5.276695711s for fixHost
	I1210 08:40:35.687391  559515 start.go:83] releasing machines lock for "kubernetes-upgrade-470056", held for 5.276755272s
	I1210 08:40:35.687546  559515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470056
	I1210 08:40:35.715293  559515 ssh_runner.go:195] Run: cat /version.json
	I1210 08:40:35.715345  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.715591  559515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 08:40:35.715643  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.747155  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.760447  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.875652  559515 ssh_runner.go:195] Run: systemctl --version
	I1210 08:40:35.992980  559515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 08:40:36.060420  559515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 08:40:36.068629  559515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 08:40:36.068701  559515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 08:40:36.080402  559515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 08:40:36.080426  559515 start.go:496] detecting cgroup driver to use...
	I1210 08:40:36.080457  559515 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 08:40:36.080505  559515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 08:40:36.097918  559515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 08:40:36.113386  559515 docker.go:218] disabling cri-docker service (if available) ...
	I1210 08:40:36.113506  559515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 08:40:36.140941  559515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 08:40:36.164336  559515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 08:40:36.341219  559515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 08:40:36.510314  559515 docker.go:234] disabling docker service ...
	I1210 08:40:36.510436  559515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 08:40:36.544014  559515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 08:40:36.565058  559515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 08:40:36.755122  559515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 08:40:36.954246  559515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 08:40:36.973758  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 08:40:36.998372  559515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 08:40:36.998450  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.011466  559515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 08:40:37.011569  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.024093  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.038124  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.050180  559515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 08:40:37.059304  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.069187  559515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.078300  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.088529  559515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 08:40:37.097305  559515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 08:40:37.105542  559515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:40:37.219171  559515 ssh_runner.go:195] Run: sudo systemctl restart crio
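(The sed/grep pipeline above edits CRI-O's drop-in config in place. A sketch of the keys /etc/crio/crio.conf.d/02-crio.conf converges on after these commands; the TOML section headers are an assumption about the file's existing layout, and only the keys the commands touch are shown:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
)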
	I1210 08:40:37.383269  559515 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 08:40:37.383392  559515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 08:40:37.387716  559515 start.go:564] Will wait 60s for crictl version
	I1210 08:40:37.387825  559515 ssh_runner.go:195] Run: which crictl
	I1210 08:40:37.392809  559515 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 08:40:37.427720  559515 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 08:40:37.427849  559515 ssh_runner.go:195] Run: crio --version
	I1210 08:40:37.463991  559515 ssh_runner.go:195] Run: crio --version
	I1210 08:40:37.501980  559515 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 08:40:37.505579  559515 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-470056 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 08:40:37.527286  559515 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 08:40:37.531117  559515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:40:37.540886  559515 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 08:40:37.541005  559515 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 08:40:37.541068  559515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:40:37.575189  559515 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1210 08:40:37.575294  559515 ssh_runner.go:195] Run: which lz4
	I1210 08:40:37.579111  559515 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 08:40:37.583005  559515 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 08:40:37.583068  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1210 08:40:39.230262  559515 crio.go:462] duration metric: took 1.651193004s to copy over tarball
	I1210 08:40:39.230351  559515 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 08:40:36.147310  550743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:40:36.178199  550743 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 08:40:36.184682  550743 api_server.go:141] control plane version: v1.34.2
	I1210 08:40:36.184709  550743 api_server.go:131] duration metric: took 1.538367022s to wait for apiserver health ...
	I1210 08:40:36.184717  550743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 08:40:36.197515  550743 system_pods.go:59] 7 kube-system pods found
	I1210 08:40:36.197552  550743 system_pods.go:61] "coredns-66bc5c9577-7r54s" [621f03c5-af3a-4a79-9ee4-b28c576f8a3a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:40:36.197560  550743 system_pods.go:61] "etcd-pause-767596" [a7bd6225-3434-45f5-93b9-e831b00e11ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 08:40:36.197566  550743 system_pods.go:61] "kindnet-kx2vt" [475678b0-6300-4037-8f4c-dbc6a7b12cb7] Running
	I1210 08:40:36.197572  550743 system_pods.go:61] "kube-apiserver-pause-767596" [16b63525-5339-4b20-845a-d458efe96c7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 08:40:36.197578  550743 system_pods.go:61] "kube-controller-manager-pause-767596" [c2cc657a-37f3-4a66-82a7-eba45b2fcbfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 08:40:36.197582  550743 system_pods.go:61] "kube-proxy-p4g2s" [6a1da39c-5fd9-47dc-8874-9dc207729443] Running
	I1210 08:40:36.197587  550743 system_pods.go:61] "kube-scheduler-pause-767596" [4f7090fa-0901-487a-bd5e-b4b358773fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 08:40:36.197593  550743 system_pods.go:74] duration metric: took 12.870088ms to wait for pod list to return data ...
	I1210 08:40:36.197601  550743 default_sa.go:34] waiting for default service account to be created ...
	I1210 08:40:36.204915  550743 default_sa.go:45] found service account: "default"
	I1210 08:40:36.204937  550743 default_sa.go:55] duration metric: took 7.330159ms for default service account to be created ...
	I1210 08:40:36.204947  550743 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 08:40:36.294173  550743 system_pods.go:86] 7 kube-system pods found
	I1210 08:40:36.294268  550743 system_pods.go:89] "coredns-66bc5c9577-7r54s" [621f03c5-af3a-4a79-9ee4-b28c576f8a3a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:40:36.294292  550743 system_pods.go:89] "etcd-pause-767596" [a7bd6225-3434-45f5-93b9-e831b00e11ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 08:40:36.294333  550743 system_pods.go:89] "kindnet-kx2vt" [475678b0-6300-4037-8f4c-dbc6a7b12cb7] Running
	I1210 08:40:36.294359  550743 system_pods.go:89] "kube-apiserver-pause-767596" [16b63525-5339-4b20-845a-d458efe96c7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 08:40:36.294382  550743 system_pods.go:89] "kube-controller-manager-pause-767596" [c2cc657a-37f3-4a66-82a7-eba45b2fcbfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 08:40:36.294417  550743 system_pods.go:89] "kube-proxy-p4g2s" [6a1da39c-5fd9-47dc-8874-9dc207729443] Running
	I1210 08:40:36.294441  550743 system_pods.go:89] "kube-scheduler-pause-767596" [4f7090fa-0901-487a-bd5e-b4b358773fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 08:40:36.294463  550743 system_pods.go:126] duration metric: took 89.509985ms to wait for k8s-apps to be running ...
	I1210 08:40:36.294500  550743 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 08:40:36.294587  550743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:40:36.323019  550743 system_svc.go:56] duration metric: took 28.50256ms WaitForService to wait for kubelet
	I1210 08:40:36.323095  550743 kubeadm.go:587] duration metric: took 12.914806514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 08:40:36.323132  550743 node_conditions.go:102] verifying NodePressure condition ...
	I1210 08:40:36.331911  550743 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 08:40:36.331989  550743 node_conditions.go:123] node cpu capacity is 2
	I1210 08:40:36.332041  550743 node_conditions.go:105] duration metric: took 8.885375ms to run NodePressure ...
	I1210 08:40:36.332072  550743 start.go:242] waiting for startup goroutines ...
	I1210 08:40:36.332094  550743 start.go:247] waiting for cluster config update ...
	I1210 08:40:36.332126  550743 start.go:256] writing updated cluster config ...
	I1210 08:40:36.332479  550743 ssh_runner.go:195] Run: rm -f paused
	I1210 08:40:36.337822  550743 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 08:40:36.338344  550743 kapi.go:59] client config for pause-767596: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/pause-767596/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/pause-767596/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 08:40:36.407839  550743 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7r54s" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:38.415041  550743 pod_ready.go:94] pod "coredns-66bc5c9577-7r54s" is "Ready"
	I1210 08:40:38.415070  550743 pod_ready.go:86] duration metric: took 2.007204238s for pod "coredns-66bc5c9577-7r54s" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:38.419883  550743 pod_ready.go:83] waiting for pod "etcd-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 08:40:40.426849  550743 pod_ready.go:104] pod "etcd-pause-767596" is not "Ready", error: <nil>
	W1210 08:40:42.949567  550743 pod_ready.go:104] pod "etcd-pause-767596" is not "Ready", error: <nil>
	I1210 08:40:43.427007  550743 pod_ready.go:94] pod "etcd-pause-767596" is "Ready"
	I1210 08:40:43.427044  550743 pod_ready.go:86] duration metric: took 5.007131635s for pod "etcd-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.429937  550743 pod_ready.go:83] waiting for pod "kube-apiserver-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.436011  550743 pod_ready.go:94] pod "kube-apiserver-pause-767596" is "Ready"
	I1210 08:40:43.436046  550743 pod_ready.go:86] duration metric: took 6.087964ms for pod "kube-apiserver-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.439451  550743 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.445121  550743 pod_ready.go:94] pod "kube-controller-manager-pause-767596" is "Ready"
	I1210 08:40:43.445145  550743 pod_ready.go:86] duration metric: took 5.664485ms for pod "kube-controller-manager-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.448754  550743 pod_ready.go:83] waiting for pod "kube-proxy-p4g2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.625336  550743 pod_ready.go:94] pod "kube-proxy-p4g2s" is "Ready"
	I1210 08:40:43.625382  550743 pod_ready.go:86] duration metric: took 176.567982ms for pod "kube-proxy-p4g2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.825217  550743 pod_ready.go:83] waiting for pod "kube-scheduler-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:44.224944  550743 pod_ready.go:94] pod "kube-scheduler-pause-767596" is "Ready"
	I1210 08:40:44.224983  550743 pod_ready.go:86] duration metric: took 399.736317ms for pod "kube-scheduler-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:44.224997  550743 pod_ready.go:40] duration metric: took 7.887141637s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 08:40:44.309595  550743 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1210 08:40:44.315354  550743 out.go:179] * Done! kubectl is now configured to use "pause-767596" cluster and "default" namespace by default
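(The pod_ready polling by process 550743 above is minikube's built-in readiness wait. The same condition can be reproduced against the finished cluster with kubectl; a sketch using two of the label selectors listed in the log, with the remaining components following the same pattern:

    kubectl --context pause-767596 -n kube-system wait pod \
      -l component=etcd --for=condition=Ready --timeout=4m
    kubectl --context pause-767596 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
)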
	I1210 08:40:41.539329  559515 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.308951206s)
	I1210 08:40:41.539353  559515 crio.go:469] duration metric: took 2.309048692s to extract the tarball
	I1210 08:40:41.539360  559515 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 08:40:41.680093  559515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:40:41.716512  559515 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 08:40:41.716537  559515 cache_images.go:86] Images are preloaded, skipping loading
	I1210 08:40:41.716551  559515 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 08:40:41.716645  559515 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-470056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 08:40:41.716748  559515 ssh_runner.go:195] Run: crio config
	I1210 08:40:41.794006  559515 cni.go:84] Creating CNI manager for ""
	I1210 08:40:41.794030  559515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 08:40:41.794054  559515 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 08:40:41.794078  559515 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-470056 NodeName:kubernetes-upgrade-470056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 08:40:41.794211  559515 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-470056"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 08:40:41.794290  559515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 08:40:41.803239  559515 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 08:40:41.803363  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 08:40:41.810785  559515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1210 08:40:41.824511  559515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 08:40:41.845081  559515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
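(A generated config of this size is easy to get wrong; recent kubeadm releases can lint it offline before the init phases run. A sketch, reusing the binaries path from the log:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
)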
	I1210 08:40:41.871774  559515 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 08:40:41.875920  559515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:40:41.888908  559515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:40:42.014653  559515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 08:40:42.034732  559515 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056 for IP: 192.168.85.2
	I1210 08:40:42.034805  559515 certs.go:195] generating shared ca certs ...
	I1210 08:40:42.034838  559515 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:40:42.035067  559515 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 08:40:42.035169  559515 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 08:40:42.035207  559515 certs.go:257] generating profile certs ...
	I1210 08:40:42.035342  559515 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/client.key
	I1210 08:40:42.035478  559515 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/apiserver.key.45c47546
	I1210 08:40:42.035578  559515 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/proxy-client.key
	I1210 08:40:42.035748  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 08:40:42.035825  559515 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 08:40:42.035863  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 08:40:42.035926  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 08:40:42.035986  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 08:40:42.036043  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 08:40:42.036135  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 08:40:42.036922  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 08:40:42.079780  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 08:40:42.128421  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 08:40:42.179894  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 08:40:42.203313  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 08:40:42.226109  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 08:40:42.248667  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 08:40:42.271268  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 08:40:42.292926  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 08:40:42.313701  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 08:40:42.334504  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 08:40:42.354555  559515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 08:40:42.368724  559515 ssh_runner.go:195] Run: openssl version
	I1210 08:40:42.377006  559515 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.385181  559515 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 08:40:42.394792  559515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.398655  559515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.398722  559515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.439750  559515 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 08:40:42.447349  559515 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.454987  559515 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 08:40:42.464323  559515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.468227  559515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.468337  559515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.511301  559515 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 08:40:42.519128  559515 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.527862  559515 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 08:40:42.535897  559515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.539729  559515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.539837  559515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.580647  559515 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
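(The ln -fs / openssl x509 -hash pairs above implement OpenSSL's hashed-directory convention: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, so OpenSSL can locate it by hash lookup. Done by hand for the minikube CA, with the hash value matching the symlink the log verifies:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0
)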
	I1210 08:40:42.588135  559515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 08:40:42.591981  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 08:40:42.634050  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 08:40:42.676821  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 08:40:42.719106  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 08:40:42.761009  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 08:40:42.802912  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
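(The six openssl runs above are expiry checks: -checkend N exits non-zero if the certificate expires within N seconds, so 86400 asks whether each cert is still valid 24 hours from now. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400 \
      && echo "valid for at least another day" \
      || echo "expires within 24h"
)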
	I1210 08:40:42.845199  559515 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:40:42.845325  559515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 08:40:42.845407  559515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 08:40:42.919731  559515 cri.go:89] found id: ""
	I1210 08:40:42.919869  559515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 08:40:42.939688  559515 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 08:40:42.939759  559515 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 08:40:42.939836  559515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 08:40:42.951898  559515 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 08:40:42.952491  559515 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-470056" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:40:42.952776  559515 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-376671/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-470056" cluster setting kubeconfig missing "kubernetes-upgrade-470056" context setting]
	I1210 08:40:42.953270  559515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:40:42.953988  559515 kapi.go:59] client config for kubernetes-upgrade-470056: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 08:40:42.954742  559515 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 08:40:42.954787  559515 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 08:40:42.954808  559515 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 08:40:42.954828  559515 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 08:40:42.954858  559515 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 08:40:42.955197  559515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 08:40:42.969776  559515 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 08:40:09.943357327 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 08:40:41.863683282 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-470056"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
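	Note: the diff above is the substance of this upgrade step. Moving from v1.28.0 to v1.35.0-beta.0 switches the config from kubeadm.k8s.io/v1beta3 to v1beta4, where extraArgs and kubeletExtraArgs change from string maps to ordered lists of name/value pairs, and the etcd proxy-refresh-interval override is dropped. Pulled together from the diff (a fragment, not the full config dump), the regenerated v1beta4 shape is:
	
	    apiVersion: kubeadm.k8s.io/v1beta4
	    kind: InitConfiguration
	    nodeRegistration:
	      kubeletExtraArgs:
	        - name: "node-ip"
	          value: "192.168.85.2"
	    ---
	    apiVersion: kubeadm.k8s.io/v1beta4
	    kind: ClusterConfiguration
	    apiServer:
	      extraArgs:
	        - name: "enable-admission-plugins"
	          value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    controllerManager:
	      extraArgs:
	        - name: "allocate-node-cidrs"
	          value: "true"
	        - name: "leader-elect"
	          value: "false"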
	I1210 08:40:42.969838  559515 kubeadm.go:1161] stopping kube-system containers ...
	I1210 08:40:42.969864  559515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 08:40:42.969937  559515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 08:40:43.022682  559515 cri.go:89] found id: ""
	I1210 08:40:43.022839  559515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 08:40:43.042594  559515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 08:40:43.051165  559515 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 10 08:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 10 08:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 10 08:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 10 08:40 /etc/kubernetes/scheduler.conf
	
	I1210 08:40:43.051260  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 08:40:43.060495  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 08:40:43.070827  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 08:40:43.079515  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 08:40:43.079584  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 08:40:43.087808  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 08:40:43.096558  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 08:40:43.096678  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 08:40:43.104568  559515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 08:40:43.116430  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:43.166741  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:44.689612  559515 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.52278604s)
	I1210 08:40:44.689688  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:44.978902  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
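	Note: the sequence above is minikube's in-place reconfigure path: stop the kubelet, delete any kubeconfig that no longer references https://control-plane.minikube.internal:8443, promote the new kubeadm.yaml, then replay the individual kubeadm init phases against it. Condensed into a shell sketch (commands lifted from the log; run inside the node):
	
	    sudo systemctl stop kubelet
	    # drop stale kubeconfigs that miss the new control-plane endpoint
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf \
	      || sudo rm -f /etc/kubernetes/scheduler.conf
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    # replay each init phase with the upgraded binaries on PATH
	    sudo /bin/bash -c 'env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml'
	    # ...and likewise for the remaining phases: "kubeconfig all", "kubelet-start", "control-plane all"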
	
	
	==> CRI-O <==
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.939293031Z" level=info msg="Started container" PID=2483 containerID=d6c7b4fc3031146f4408f3b9fc1855ea7b8c17881d51899d60b47dc3ac1095ad description=kube-system/kube-scheduler-pause-767596/kube-scheduler id=f9cc9a1f-e031-434b-ad19-6e575ebd181d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b6c36ecf49986e23063f8c103db8c6814c9c467a7e907e2834335b15df918bc
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.942821502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.957860613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.965599351Z" level=info msg="Started container" PID=2484 containerID=195fe55f5ac4a49a5f8ab6ad8e4d47c8eda98becee1742359aeff6075452e689 description=kube-system/etcd-pause-767596/etcd id=95a92f52-377a-4431-87e9-eec688bef20e name=/runtime.v1.RuntimeService/StartContainer sandboxID=46f5a3c85aab673bd2daf37e5551a3c4b0ee089425e162224498414a9bb30f90
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.996975425Z" level=info msg="Created container 0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca: kube-system/kube-apiserver-pause-767596/kube-apiserver" id=925cbd64-0622-4b32-bf91-f6e8dfe8d145 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.997985839Z" level=info msg="Starting container: 0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca" id=f7d5d787-74ec-4aef-9d13-795b48519abf name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 08:40:28 pause-767596 crio[2144]: time="2025-12-10T08:40:28.002255791Z" level=info msg="Started container" PID=2513 containerID=0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca description=kube-system/kube-apiserver-pause-767596/kube-apiserver id=f7d5d787-74ec-4aef-9d13-795b48519abf name=/runtime.v1.RuntimeService/StartContainer sandboxID=66c53b93e01ff43370b2463b5056791d03dc6e17cde52b70f3ef0e7d5c07020e
	Dec 10 08:40:28 pause-767596 crio[2144]: time="2025-12-10T08:40:28.304356312Z" level=info msg="Created container 23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f: kube-system/kube-proxy-p4g2s/kube-proxy" id=80dc2755-f8e5-4410-ae87-8348b670a07e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 08:40:28 pause-767596 crio[2144]: time="2025-12-10T08:40:28.305033808Z" level=info msg="Starting container: 23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f" id=c8294f6d-e153-483b-a747-a7ebcc411dc5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 08:40:28 pause-767596 crio[2144]: time="2025-12-10T08:40:28.307925932Z" level=info msg="Started container" PID=2512 containerID=23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f description=kube-system/kube-proxy-p4g2s/kube-proxy id=c8294f6d-e153-483b-a747-a7ebcc411dc5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=11eb6a592222a33bac549e35894003b03b6ab247b291c71ec72b605201c2b7a1
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.532741716Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.543123386Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.543159021Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.543176391Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.546877278Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.547063578Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.547141806Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.551159234Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.551196814Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.551219468Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.554429799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.554465032Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.55448785Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.558483838Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.558522821Z" level=info msg="Updated default CNI network name to kindnet"
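	Note: the CREATE/WRITE/RENAME burst above is kindnet atomically rewriting its CNI config: it writes 10-kindnet.conflist.temp and renames it over 10-kindnet.conflist, and CRI-O's CNI watcher re-reads the default network on every event. A minimal sketch of a ptp-type conflist of the kind kindnet maintains (field values are illustrative, inferred from the log and the node's PodCIDR rather than copied from the host):
	
	    {
	      "cniVersion": "0.3.1",
	      "name": "kindnet",
	      "plugins": [
	        {
	          "type": "ptp",
	          "ipMasq": false,
	          "ipam": {
	            "type": "host-local",
	            "ranges": [[{ "subnet": "10.244.0.0/24" }]],
	            "routes": [{ "dst": "0.0.0.0/0" }]
	          }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }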
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0cbf424c207d4       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   19 seconds ago       Running             kube-apiserver            1                   66c53b93e01ff       kube-apiserver-pause-767596            kube-system
	23d088000dc21       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   19 seconds ago       Running             kube-proxy                1                   11eb6a592222a       kube-proxy-p4g2s                       kube-system
	195fe55f5ac4a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   19 seconds ago       Running             etcd                      1                   46f5a3c85aab6       etcd-pause-767596                      kube-system
	d6c7b4fc30311       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   19 seconds ago       Running             kube-scheduler            1                   8b6c36ecf4998       kube-scheduler-pause-767596            kube-system
	d7674cb30a043       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   19 seconds ago       Running             kube-controller-manager   1                   640bc27225613       kube-controller-manager-pause-767596   kube-system
	3041c7738207a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   2                   d49ccd7fa1884       coredns-66bc5c9577-7r54s               kube-system
	4d1d5992cc4a7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               2                   ca68b9d4dabd6       kindnet-kx2vt                          kube-system
	a6d2f0e2f3d9d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Created             coredns                   1                   d49ccd7fa1884       coredns-66bc5c9577-7r54s               kube-system
	794b94a9f483a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Created             kindnet-cni               1                   ca68b9d4dabd6       kindnet-kx2vt                          kube-system
	2e2fa03d36412       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago        Exited              coredns                   0                   d49ccd7fa1884       coredns-66bc5c9577-7r54s               kube-system
	022741f8c7a58       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   2 minutes ago        Exited              kube-proxy                0                   11eb6a592222a       kube-proxy-p4g2s                       kube-system
	b9c23cab83e39       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago        Exited              kindnet-cni               0                   ca68b9d4dabd6       kindnet-kx2vt                          kube-system
	c1c1db525792c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   3 minutes ago        Exited              etcd                      0                   46f5a3c85aab6       etcd-pause-767596                      kube-system
	d799933dbf1f7       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   3 minutes ago        Exited              kube-apiserver            0                   66c53b93e01ff       kube-apiserver-pause-767596            kube-system
	81209469877d5       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   3 minutes ago        Exited              kube-scheduler            0                   8b6c36ecf4998       kube-scheduler-pause-767596            kube-system
	f04f090d9d5b7       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   3 minutes ago        Exited              kube-controller-manager   0                   640bc27225613       kube-controller-manager-pause-767596   kube-system
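	Note: the ATTEMPT column tells the restart story: every control-plane container is on attempt 1 (their attempt-0 instances show Exited lower in the table), while coredns and kindnet are on attempt 2 after an intermediate attempt that only reached Created. The same table can be reproduced on the node with the crictl invocation the log itself uses, minus --quiet:
	
	    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system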
	
	
	==> coredns [2e2fa03d36412a2c91c2f76d767f6e2e9e591aa5b021bfb00f6f513ec0bfd307] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58816 - 33210 "HINFO IN 8469837959569984790.7529404179972013098. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041734054s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3041c7738207a5c5b2d647d810aafb92c70d64ab842ac4e8145635706f896e61] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46857 - 10465 "HINFO IN 6515730640656355756.8113294786332208993. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015096539s
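	Note: this CoreDNS instance came up before the restarted apiserver was reachable, so the kubernetes plugin retries its list calls against 10.96.0.1:443, holds the server ("waiting for Kubernetes API before starting server"), and eventually starts unsynced. That gating comes from the kubernetes block of the Corefile; the kubeadm default looks roughly like the sketch below (illustrative, not dumped from this cluster, though the 5s lameduck matches the shutdown log above):
	
	    .:53 {
	        errors
	        health { lameduck 5s }
	        ready
	        kubernetes cluster.local in-addr.arpa ip6.arpa {
	            pods insecure
	            fallthrough in-addr.arpa ip6.arpa
	            ttl 30
	        }
	        prometheus :9153
	        forward . /etc/resolv.conf
	        cache 30
	        loop
	        reload
	        loadbalance
	    }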
	
	
	==> coredns [a6d2f0e2f3d9d16f6e4e55ca9f54961f4e0d50dcc4d20da2b0505e4622433a37] <==
	
	
	==> describe nodes <==
	Name:               pause-767596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-767596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=pause-767596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T08_37_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 08:37:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-767596
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 08:40:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 08:38:37 +0000   Wed, 10 Dec 2025 08:37:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 08:38:37 +0000   Wed, 10 Dec 2025 08:37:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 08:38:37 +0000   Wed, 10 Dec 2025 08:37:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 08:38:37 +0000   Wed, 10 Dec 2025 08:38:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-767596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bfdf75342fda7ce4dcc05536938a4f8
	  System UUID:                fd128a2e-8483-4dc7-b52d-1cda387dfee2
	  Boot ID:                    9ae06026-ffc7-4eb4-912b-d54adcad0f66
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7r54s                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m51s
	  kube-system                 etcd-pause-767596                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m57s
	  kube-system                 kindnet-kx2vt                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m51s
	  kube-system                 kube-apiserver-pause-767596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-controller-manager-pause-767596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-proxy-p4g2s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-scheduler-pause-767596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m50s                kube-proxy       
	  Normal   Starting                 11s                  kube-proxy       
	  Warning  CgroupV1                 3m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m9s (x8 over 3m9s)  kubelet          Node pause-767596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m9s (x8 over 3m9s)  kubelet          Node pause-767596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m9s (x8 over 3m9s)  kubelet          Node pause-767596 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m57s                kubelet          Node pause-767596 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m57s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m57s                kubelet          Node pause-767596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m57s                kubelet          Node pause-767596 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m57s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m52s                node-controller  Node pause-767596 event: Registered Node pause-767596 in Controller
	  Normal   NodeReady                2m10s                kubelet          Node pause-767596 status is now: NodeReady
	  Warning  ContainerGCFailed        57s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             22s (x7 over 84s)    kubelet          Node pause-767596 status is now: NodeNotReady
	  Normal   RegisteredNode           10s                  node-controller  Node pause-767596 event: Registered Node pause-767596 in Controller
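	Note: the event trail captures the pause/unpause cycle end to end: ContainerGCFailed while crio.sock was gone, a run of NodeNotReady while the kubelet could not reach the runtime, then a fresh RegisteredNode once the control plane returned. The same view can be pulled directly (minikube names the kubeconfig context after the profile):
	
	    kubectl --context pause-767596 describe node pause-767596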
	
	
	==> dmesg <==
	[  +3.809013] overlayfs: idmapped layers are currently not supported
	[Dec10 08:14] overlayfs: idmapped layers are currently not supported
	[ +34.529466] overlayfs: idmapped layers are currently not supported
	[Dec10 08:15] overlayfs: idmapped layers are currently not supported
	[  +3.847763] overlayfs: idmapped layers are currently not supported
	[Dec10 08:17] overlayfs: idmapped layers are currently not supported
	[Dec10 08:18] overlayfs: idmapped layers are currently not supported
	[Dec10 08:20] overlayfs: idmapped layers are currently not supported
	[Dec10 08:24] overlayfs: idmapped layers are currently not supported
	[Dec10 08:25] overlayfs: idmapped layers are currently not supported
	[Dec10 08:26] overlayfs: idmapped layers are currently not supported
	[Dec10 08:27] overlayfs: idmapped layers are currently not supported
	[Dec10 08:28] overlayfs: idmapped layers are currently not supported
	[Dec10 08:30] overlayfs: idmapped layers are currently not supported
	[ +17.507086] overlayfs: idmapped layers are currently not supported
	[Dec10 08:31] overlayfs: idmapped layers are currently not supported
	[ +48.274286] overlayfs: idmapped layers are currently not supported
	[Dec10 08:32] overlayfs: idmapped layers are currently not supported
	[ +48.206918] overlayfs: idmapped layers are currently not supported
	[Dec10 08:33] overlayfs: idmapped layers are currently not supported
	[Dec10 08:34] overlayfs: idmapped layers are currently not supported
	[Dec10 08:35] overlayfs: idmapped layers are currently not supported
	[Dec10 08:37] overlayfs: idmapped layers are currently not supported
	[  +1.400670] overlayfs: idmapped layers are currently not supported
	[Dec10 08:40] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [195fe55f5ac4a49a5f8ab6ad8e4d47c8eda98becee1742359aeff6075452e689] <==
	{"level":"warn","ts":"2025-12-10T08:40:32.572044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.606857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.636293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.676994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.717059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.805528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.838610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.855479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.901389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.916815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.954352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.975348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.991722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.034153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.059993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.070494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.082695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.105372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.138837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.159410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.200263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.233848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.268352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.287087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.344876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41712","server-name":"","error":"EOF"}
	
	
	==> etcd [c1c1db525792cee93534a1a2d8bc9e289bb8e44a3c18c1a2962887ee83192854] <==
	{"level":"warn","ts":"2025-12-10T08:37:45.289358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.326521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.348167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.405790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.445716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.489219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.648462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35208","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T08:38:43.032843Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T08:38:43.032898Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-767596","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-10T08:38:43.033023Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T08:38:43.037765Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T08:38:44.069384Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T08:38:44.069617Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T08:38:44.069795Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T08:38:44.069843Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T08:38:44.069692Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T08:38:44.069923Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T08:38:44.069973Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T08:38:44.069707Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-10T08:38:44.070068Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-10T08:38:44.070102Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T08:38:44.078624Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-10T08:38:44.078785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T08:38:44.078858Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T08:38:44.078898Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-767596","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 08:40:47 up  3:23,  0 user,  load average: 3.11, 2.35, 1.88
	Linux pause-767596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d1d5992cc4a7b009e9355692324b93c4073f0c07db670f0d6fb5415eb94e688] <==
	I1210 08:40:23.304467       1 main.go:148] setting mtu 1500 for CNI 
	I1210 08:40:23.304479       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 08:40:23.304492       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T08:40:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 08:40:23.532747       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 08:40:23.533063       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 08:40:23.533119       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 08:40:23.534075       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 08:40:23.534358       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 08:40:23.537567       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 08:40:23.537734       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 08:40:23.537882       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 08:40:24.511110       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 08:40:24.598240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 08:40:24.935954       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 08:40:24.966764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 08:40:26.155280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 08:40:26.659535       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 08:40:26.674234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 08:40:27.114735       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1210 08:40:34.735308       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 08:40:34.735418       1 metrics.go:72] Registering metrics
	I1210 08:40:34.735541       1 controller.go:711] "Syncing nftables rules"
	I1210 08:40:43.532335       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 08:40:43.532439       1 main.go:301] handling current node
	
	
	==> kindnet [794b94a9f483ad2f1228b1f4bcff462c3ed700eb0451dc2e54464daa7fb66455] <==
	
	
	==> kindnet [b9c23cab83e3933a35721d41b4a1c9d16c345dc93c9e4f506985082ad0b659b0] <==
	I1210 08:37:56.904023       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 08:37:56.996317       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 08:37:56.996464       1 main.go:148] setting mtu 1500 for CNI 
	I1210 08:37:56.996477       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 08:37:56.996491       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T08:37:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 08:37:57.198735       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 08:37:57.201196       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 08:37:57.201254       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 08:37:57.201773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 08:38:27.199572       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 08:38:27.202044       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 08:38:27.202227       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 08:38:27.203457       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1210 08:38:28.601692       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 08:38:28.601793       1 metrics.go:72] Registering metrics
	I1210 08:38:28.601878       1 controller.go:711] "Syncing nftables rules"
	I1210 08:38:37.198532       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 08:38:37.198659       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca] <==
	I1210 08:40:34.631168       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 08:40:34.631229       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 08:40:34.634686       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 08:40:34.634702       1 policy_source.go:240] refreshing policies
	I1210 08:40:34.656353       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 08:40:34.658530       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 08:40:34.669587       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 08:40:34.669676       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 08:40:34.669957       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 08:40:34.670077       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 08:40:34.670999       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 08:40:34.675161       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 08:40:34.675529       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 08:40:34.675652       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 08:40:34.676334       1 aggregator.go:171] initial CRD sync complete...
	I1210 08:40:34.676387       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 08:40:34.676417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 08:40:34.676444       1 cache.go:39] Caches are synced for autoregister controller
	E1210 08:40:34.747351       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 08:40:35.185902       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 08:40:36.512781       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 08:40:37.965731       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 08:40:38.133362       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 08:40:38.182287       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 08:40:38.259165       1 controller.go:667] quota admission added evaluator for: deployments.apps
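	Note: the "quota admission added evaluator for" lines show the restarted apiserver lazily registering quota evaluators as the controller manager recreates serviceaccounts, endpointslices, replicasets, endpoints, and deployments, i.e. the control plane is accepting writes again. A quick readiness probe against it (illustrative):
	
	    kubectl --context pause-767596 get --raw='/readyz?verbose'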
	
	
	==> kube-apiserver [d799933dbf1f73062115abbf6e2ba939f1de6386c5fd7cc217bae3a3989ffcf5] <==
	W1210 08:38:43.058121       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058170       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058219       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058270       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058322       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058378       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058430       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058482       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058533       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058586       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058639       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058690       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058750       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058803       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058854       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058909       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060136       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060199       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060249       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060299       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060348       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060394       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060445       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060493       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060794       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [d7674cb30a04349c5eac365914ffd12f57c54b30f34067fb8aaca29e1c1f8db6] <==
	I1210 08:40:37.906458       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 08:40:37.906503       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 08:40:37.915004       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 08:40:37.915180       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 08:40:37.916709       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 08:40:37.917086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 08:40:37.925596       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 08:40:37.925638       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 08:40:37.925761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 08:40:37.925838       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 08:40:37.925927       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 08:40:37.925956       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 08:40:37.931117       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 08:40:37.931217       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 08:40:37.931984       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 08:40:37.945501       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 08:40:37.946148       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 08:40:37.951388       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 08:40:37.951515       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 08:40:37.951601       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-767596"
	I1210 08:40:37.951737       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 08:40:37.960181       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 08:40:37.997676       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 08:40:37.997726       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 08:40:37.997738       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [f04f090d9d5b769dca360fb021e4a3baab548c08d62a34df27003779c331b1d6] <==
	I1210 08:37:55.054147       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 08:37:55.054153       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 08:37:55.054670       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 08:37:55.054702       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 08:37:55.054755       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 08:37:55.055207       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 08:37:55.057209       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 08:37:55.057294       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 08:37:55.057612       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 08:37:55.057813       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 08:37:55.058310       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 08:37:55.077878       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-767596" podCIDRs=["10.244.0.0/24"]
	I1210 08:37:55.090029       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 08:37:55.090131       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 08:37:55.090161       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 08:37:55.092621       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 08:37:55.092802       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 08:37:55.093233       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-767596"
	I1210 08:37:55.094781       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 08:37:55.094899       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 08:37:55.095363       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 08:37:55.095426       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 08:37:55.096329       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 08:37:55.096411       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 08:38:40.103143       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [022741f8c7a581c5c492c4ae38c1775fb76122b7b80fa353571b73866aeabdcb] <==
	I1210 08:37:56.920456       1 server_linux.go:53] "Using iptables proxy"
	I1210 08:37:57.024650       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 08:37:57.130446       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 08:37:57.130486       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 08:37:57.130563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 08:37:57.239706       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 08:37:57.239768       1 server_linux.go:132] "Using iptables Proxier"
	I1210 08:37:57.244050       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 08:37:57.244372       1 server.go:527] "Version info" version="v1.34.2"
	I1210 08:37:57.244396       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 08:37:57.245912       1 config.go:200] "Starting service config controller"
	I1210 08:37:57.245936       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 08:37:57.245955       1 config.go:106] "Starting endpoint slice config controller"
	I1210 08:37:57.245959       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 08:37:57.245979       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 08:37:57.245984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 08:37:57.246679       1 config.go:309] "Starting node config controller"
	I1210 08:37:57.246700       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 08:37:57.246707       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 08:37:57.346772       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 08:37:57.346966       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 08:37:57.347235       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f] <==
	I1210 08:40:35.306584       1 server_linux.go:53] "Using iptables proxy"
	I1210 08:40:35.667232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 08:40:35.903097       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 08:40:35.903136       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 08:40:35.903201       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 08:40:36.126495       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 08:40:36.126556       1 server_linux.go:132] "Using iptables Proxier"
	I1210 08:40:36.133220       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 08:40:36.133514       1 server.go:527] "Version info" version="v1.34.2"
	I1210 08:40:36.133530       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 08:40:36.134951       1 config.go:200] "Starting service config controller"
	I1210 08:40:36.134964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 08:40:36.151366       1 config.go:106] "Starting endpoint slice config controller"
	I1210 08:40:36.151396       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 08:40:36.151425       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 08:40:36.151429       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 08:40:36.152121       1 config.go:309] "Starting node config controller"
	I1210 08:40:36.152128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 08:40:36.152134       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 08:40:36.238902       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 08:40:36.251557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 08:40:36.251673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [81209469877d513d99fb101d1a10c2ce3527b803a1a2632345426450f209d461] <==
	I1210 08:37:48.043502       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1210 08:37:48.069814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 08:37:48.069984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 08:37:48.070483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 08:37:48.078358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 08:37:48.079050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 08:37:48.079285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 08:37:48.084361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 08:37:48.084994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 08:37:48.085413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 08:37:48.086815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 08:37:48.087490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 08:37:48.090642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 08:37:48.090925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 08:37:48.091105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 08:37:48.091211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 08:37:48.091450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 08:37:48.091570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 08:37:48.091717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1210 08:37:49.637310       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 08:38:43.031399       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 08:38:43.031422       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 08:38:43.031455       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1210 08:38:43.031456       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1210 08:38:43.031469       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d6c7b4fc3031146f4408f3b9fc1855ea7b8c17881d51899d60b47dc3ac1095ad] <==
	I1210 08:40:32.750709       1 serving.go:386] Generated self-signed cert in-memory
	I1210 08:40:36.797598       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 08:40:36.797631       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 08:40:36.815541       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 08:40:36.815648       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 08:40:36.815675       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 08:40:36.815707       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 08:40:36.824921       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 08:40:36.824958       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 08:40:36.824977       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 08:40:36.824984       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 08:40:36.916716       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1210 08:40:36.925085       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 08:40:36.925129       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.869006    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4g2s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6a1da39c-5fd9-47dc-8874-9dc207729443" pod="kube-system/kube-proxy-p4g2s"
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.869408    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-7r54s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="621f03c5-af3a-4a79-9ee4-b28c576f8a3a" pod="kube-system/coredns-66bc5c9577-7r54s"
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.869789    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b938f46528c3209faec4a21e53aef9ab" pod="kube-system/kube-apiserver-pause-767596"
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.871216    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9207bccb5f86e0457a4555234e6ce912" pod="kube-system/kube-controller-manager-pause-767596"
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.871623    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="44ca7fad4b77c4449373efd3a5c7f1c5" pod="kube-system/kube-scheduler-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.071930    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b938f46528c3209faec4a21e53aef9ab" pod="kube-system/kube-apiserver-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.073125    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9207bccb5f86e0457a4555234e6ce912" pod="kube-system/kube-controller-manager-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.073538    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="44ca7fad4b77c4449373efd3a5c7f1c5" pod="kube-system/kube-scheduler-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.074168    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9b51ec6fb7597bdd245e6184e27911f1" pod="kube-system/etcd-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.074387    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-kx2vt\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="475678b0-6300-4037-8f4c-dbc6a7b12cb7" pod="kube-system/kindnet-kx2vt"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.074537    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4g2s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6a1da39c-5fd9-47dc-8874-9dc207729443" pod="kube-system/kube-proxy-p4g2s"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.074693    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-7r54s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="621f03c5-af3a-4a79-9ee4-b28c576f8a3a" pod="kube-system/coredns-66bc5c9577-7r54s"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.077236    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9207bccb5f86e0457a4555234e6ce912" pod="kube-system/kube-controller-manager-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.077773    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="44ca7fad4b77c4449373efd3a5c7f1c5" pod="kube-system/kube-scheduler-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.078141    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9b51ec6fb7597bdd245e6184e27911f1" pod="kube-system/etcd-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.078494    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-kx2vt\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="475678b0-6300-4037-8f4c-dbc6a7b12cb7" pod="kube-system/kindnet-kx2vt"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.078861    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4g2s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6a1da39c-5fd9-47dc-8874-9dc207729443" pod="kube-system/kube-proxy-p4g2s"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.079589    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-7r54s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="621f03c5-af3a-4a79-9ee4-b28c576f8a3a" pod="kube-system/coredns-66bc5c9577-7r54s"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.080190    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b938f46528c3209faec4a21e53aef9ab" pod="kube-system/kube-apiserver-pause-767596"
	Dec 10 08:40:34 pause-767596 kubelet[1328]: E1210 08:40:34.383960    1328 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-767596\" is forbidden: User \"system:node:pause-767596\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-767596' and this object" podUID="44ca7fad4b77c4449373efd3a5c7f1c5" pod="kube-system/kube-scheduler-pause-767596"
	Dec 10 08:40:34 pause-767596 kubelet[1328]: E1210 08:40:34.487211    1328 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-767596\" is forbidden: User \"system:node:pause-767596\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-767596' and this object" podUID="9b51ec6fb7597bdd245e6184e27911f1" pod="kube-system/etcd-pause-767596"
	Dec 10 08:40:34 pause-767596 kubelet[1328]: E1210 08:40:34.562579    1328 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-kx2vt\" is forbidden: User \"system:node:pause-767596\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-767596' and this object" podUID="475678b0-6300-4037-8f4c-dbc6a7b12cb7" pod="kube-system/kindnet-kx2vt"
	Dec 10 08:40:44 pause-767596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 08:40:44 pause-767596 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 08:40:44 pause-767596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-767596 -n pause-767596
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-767596 -n pause-767596: exit status 2 (371.103475ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-767596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-767596
helpers_test.go:244: (dbg) docker inspect pause-767596:

-- stdout --
	[
	    {
	        "Id": "8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32",
	        "Created": "2025-12-10T08:37:19.219959796Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 542571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-10T08:37:19.306831014Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1ff29cae50248a2025de5c362d2162552d5bd4f884571d3031e013b6e82ef1d9",
	        "ResolvConfPath": "/var/lib/docker/containers/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32/hostname",
	        "HostsPath": "/var/lib/docker/containers/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32/hosts",
	        "LogPath": "/var/lib/docker/containers/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32/8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32-json.log",
	        "Name": "/pause-767596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-767596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-767596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8db1a7a9ae811250f9311f7c797b92bccee36fcc2c60f1cc23fc89d9107b9e32",
	                "LowerDir": "/var/lib/docker/overlay2/883271cb72aed4d4c78b44c720149d5c8f8b74d226e433cbc5b02384bf5bcedd-init/diff:/var/lib/docker/overlay2/888a54fd4518421bb2e30bc2f0825f232bffa2f260991e8ba288270662a6554b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/883271cb72aed4d4c78b44c720149d5c8f8b74d226e433cbc5b02384bf5bcedd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/883271cb72aed4d4c78b44c720149d5c8f8b74d226e433cbc5b02384bf5bcedd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/883271cb72aed4d4c78b44c720149d5c8f8b74d226e433cbc5b02384bf5bcedd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-767596",
	                "Source": "/var/lib/docker/volumes/pause-767596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-767596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-767596",
	                "name.minikube.sigs.k8s.io": "pause-767596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97766222e69a3d0e70fa10b2de4407da832a1725dbe20f22a0e98ae150791032",
	            "SandboxKey": "/var/run/docker/netns/97766222e69a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33362"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33361"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-767596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:da:52:90:93:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e31929f798a32ed6a503ac5696c09fd3543da3dc8796223a5f988246910e7569",
	                    "EndpointID": "b33a2843877ebe4d47b18461d7dee1f84c50fafa2d163db899373dac98e1d9c0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-767596",
	                        "8db1a7a9ae81"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-767596 -n pause-767596
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-767596 -n pause-767596: exit status 2 (353.783503ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-767596 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-767596 logs -n 25: (1.497461582s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-500883 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-500883       │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │                     │
	│ stop    │ -p scheduled-stop-500883 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-500883       │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │                     │
	│ stop    │ -p scheduled-stop-500883 --schedule 15s -v=5 --alsologtostderr                                                                                  │ scheduled-stop-500883       │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │ 10 Dec 25 08:36 UTC │
	│ delete  │ -p scheduled-stop-500883                                                                                                                        │ scheduled-stop-500883       │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │ 10 Dec 25 08:36 UTC │
	│ start   │ -p insufficient-storage-055567 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                │ insufficient-storage-055567 │ jenkins │ v1.37.0 │ 10 Dec 25 08:36 UTC │                     │
	│ delete  │ -p insufficient-storage-055567                                                                                                                  │ insufficient-storage-055567 │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:37 UTC │
	│ start   │ -p NoKubernetes-783391 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                   │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │                     │
	│ start   │ -p pause-767596 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-767596                │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:38 UTC │
	│ start   │ -p NoKubernetes-783391 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:37 UTC │
	│ start   │ -p NoKubernetes-783391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:37 UTC │
	│ delete  │ -p NoKubernetes-783391                                                                                                                          │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:37 UTC │
	│ start   │ -p NoKubernetes-783391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:37 UTC │ 10 Dec 25 08:38 UTC │
	│ ssh     │ -p NoKubernetes-783391 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │                     │
	│ stop    │ -p NoKubernetes-783391                                                                                                                          │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:38 UTC │
	│ start   │ -p NoKubernetes-783391 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:38 UTC │
	│ ssh     │ -p NoKubernetes-783391 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │                     │
	│ delete  │ -p NoKubernetes-783391                                                                                                                          │ NoKubernetes-783391         │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:38 UTC │
	│ start   │ -p missing-upgrade-317974 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-317974      │ jenkins │ v1.35.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:39 UTC │
	│ start   │ -p pause-767596 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-767596                │ jenkins │ v1.37.0 │ 10 Dec 25 08:38 UTC │ 10 Dec 25 08:40 UTC │
	│ start   │ -p missing-upgrade-317974 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-317974      │ jenkins │ v1.37.0 │ 10 Dec 25 08:39 UTC │ 10 Dec 25 08:39 UTC │
	│ delete  │ -p missing-upgrade-317974                                                                                                                       │ missing-upgrade-317974      │ jenkins │ v1.37.0 │ 10 Dec 25 08:39 UTC │ 10 Dec 25 08:39 UTC │
	│ start   │ -p kubernetes-upgrade-470056 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-470056   │ jenkins │ v1.37.0 │ 10 Dec 25 08:39 UTC │ 10 Dec 25 08:40 UTC │
	│ stop    │ -p kubernetes-upgrade-470056                                                                                                                    │ kubernetes-upgrade-470056   │ jenkins │ v1.37.0 │ 10 Dec 25 08:40 UTC │ 10 Dec 25 08:40 UTC │
	│ start   │ -p kubernetes-upgrade-470056 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-470056   │ jenkins │ v1.37.0 │ 10 Dec 25 08:40 UTC │                     │
	│ pause   │ -p pause-767596 --alsologtostderr -v=5                                                                                                          │ pause-767596                │ jenkins │ v1.37.0 │ 10 Dec 25 08:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 08:40:30
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 08:40:30.040162  559515 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:40:30.040316  559515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:40:30.040328  559515 out.go:374] Setting ErrFile to fd 2...
	I1210 08:40:30.040334  559515 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:40:30.040721  559515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:40:30.041220  559515 out.go:368] Setting JSON to false
	I1210 08:40:30.042306  559515 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12180,"bootTime":1765343850,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 08:40:30.042423  559515 start.go:143] virtualization:  
	I1210 08:40:30.051546  559515 out.go:179] * [kubernetes-upgrade-470056] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 08:40:30.054446  559515 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:40:30.054564  559515 notify.go:221] Checking for updates...
	I1210 08:40:30.060171  559515 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:40:30.062908  559515 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:40:30.065796  559515 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 08:40:30.068574  559515 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:40:30.071374  559515 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:40:30.074806  559515 config.go:182] Loaded profile config "kubernetes-upgrade-470056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1210 08:40:30.075505  559515 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:40:30.145265  559515 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:40:30.145438  559515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:40:30.244737  559515 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:40:30.2317479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:40:30.244848  559515 docker.go:319] overlay module found
	I1210 08:40:30.247847  559515 out.go:179] * Using the docker driver based on existing profile
	I1210 08:40:30.250588  559515 start.go:309] selected driver: docker
	I1210 08:40:30.250608  559515 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:40:30.250732  559515 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:40:30.251479  559515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:40:30.368120  559515 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:40:30.354165413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:40:30.368455  559515 cni.go:84] Creating CNI manager for ""
	I1210 08:40:30.368523  559515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 08:40:30.368566  559515 start.go:353] cluster config:
	{Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:40:30.371722  559515 out.go:179] * Starting "kubernetes-upgrade-470056" primary control-plane node in "kubernetes-upgrade-470056" cluster
	I1210 08:40:30.374452  559515 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 08:40:30.377369  559515 out.go:179] * Pulling base image v0.0.48-1765319469-22089 ...
	I1210 08:40:30.380261  559515 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 08:40:30.380315  559515 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1210 08:40:30.380330  559515 cache.go:65] Caching tarball of preloaded images
	I1210 08:40:30.380421  559515 preload.go:238] Found /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1210 08:40:30.380436  559515 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1210 08:40:30.380553  559515 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/config.json ...
	I1210 08:40:30.380752  559515 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 08:40:30.410489  559515 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon, skipping pull
	I1210 08:40:30.410515  559515 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca exists in daemon, skipping load
	I1210 08:40:30.410530  559515 cache.go:243] Successfully downloaded all kic artifacts
	I1210 08:40:30.410565  559515 start.go:360] acquireMachinesLock for kubernetes-upgrade-470056: {Name:mk76103b2f0fae4fa69e0d1baba03cd5feffd6fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 08:40:30.410624  559515 start.go:364] duration metric: took 35.045µs to acquireMachinesLock for "kubernetes-upgrade-470056"
	I1210 08:40:30.410645  559515 start.go:96] Skipping create...Using existing machine configuration
	I1210 08:40:30.410655  559515 fix.go:54] fixHost starting: 
	I1210 08:40:30.410917  559515 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470056 --format={{.State.Status}}
	I1210 08:40:30.440722  559515 fix.go:112] recreateIfNeeded on kubernetes-upgrade-470056: state=Stopped err=<nil>
	W1210 08:40:30.440757  559515 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 08:40:30.443984  559515 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-470056" ...
	I1210 08:40:30.444088  559515 cli_runner.go:164] Run: docker start kubernetes-upgrade-470056
	I1210 08:40:30.811799  559515 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-470056 --format={{.State.Status}}
	I1210 08:40:30.848700  559515 kic.go:430] container "kubernetes-upgrade-470056" state is running.
	I1210 08:40:30.849107  559515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470056
	I1210 08:40:30.887232  559515 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/config.json ...
	I1210 08:40:30.887464  559515 machine.go:94] provisionDockerMachine start ...
	I1210 08:40:30.887537  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:30.917139  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:30.917487  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:30.917497  559515 main.go:143] libmachine: About to run SSH command:
	hostname
	I1210 08:40:30.918619  559515 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1210 08:40:34.090535  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470056
	
	I1210 08:40:34.090615  559515 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-470056"
	I1210 08:40:34.090716  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.128340  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:34.128657  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:34.128673  559515 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-470056 && echo "kubernetes-upgrade-470056" | sudo tee /etc/hostname
	I1210 08:40:34.297852  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-470056
	
	I1210 08:40:34.298008  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.376678  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:34.376987  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:34.377003  559515 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-470056' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-470056/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-470056' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 08:40:34.547600  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: 
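	The hosts script above is idempotent: it rewrites an existing 127.0.1.1 entry in place, or appends one only if the hostname is not already mapped, so repeated provisioning passes leave a single entry. A minimal sketch of checking the result by hand, assuming shell access to the node container (for example via docker exec into kubernetes-upgrade-470056):
	    # Show the 127.0.1.1 mapping written by the provisioner, if any
	    grep -n '^127.0.1.1' /etc/hosts || echo 'no 127.0.1.1 entry'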
	I1210 08:40:34.547679  559515 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22089-376671/.minikube CaCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22089-376671/.minikube}
	I1210 08:40:34.547721  559515 ubuntu.go:190] setting up certificates
	I1210 08:40:34.547762  559515 provision.go:84] configureAuth start
	I1210 08:40:34.547888  559515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470056
	I1210 08:40:34.573984  559515 provision.go:143] copyHostCerts
	I1210 08:40:34.574057  559515 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem, removing ...
	I1210 08:40:34.574066  559515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem
	I1210 08:40:34.574142  559515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/cert.pem (1123 bytes)
	I1210 08:40:34.574236  559515 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem, removing ...
	I1210 08:40:34.574241  559515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem
	I1210 08:40:34.574266  559515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/key.pem (1675 bytes)
	I1210 08:40:34.574315  559515 exec_runner.go:144] found /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem, removing ...
	I1210 08:40:34.574320  559515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem
	I1210 08:40:34.574343  559515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22089-376671/.minikube/ca.pem (1082 bytes)
	I1210 08:40:34.574398  559515 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-470056 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-470056 localhost minikube]
	I1210 08:40:34.686160  559515 provision.go:177] copyRemoteCerts
	I1210 08:40:34.686273  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 08:40:34.686347  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.722343  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:34.832542  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1210 08:40:34.861570  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 08:40:34.896052  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 08:40:34.921367  559515 provision.go:87] duration metric: took 373.563126ms to configureAuth
	I1210 08:40:34.921450  559515 ubuntu.go:206] setting minikube options for container-runtime
	I1210 08:40:34.921682  559515 config.go:182] Loaded profile config "kubernetes-upgrade-470056": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 08:40:34.921855  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:34.946408  559515 main.go:143] libmachine: Using SSH client type: native
	I1210 08:40:34.946719  559515 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1210 08:40:34.946736  559515 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 08:40:34.617615  550743 node_ready.go:49] node "pause-767596" is "Ready"
	I1210 08:40:34.617651  550743 node_ready.go:38] duration metric: took 10.892704746s for node "pause-767596" to be "Ready" ...
	I1210 08:40:34.617666  550743 api_server.go:52] waiting for apiserver process to appear ...
	I1210 08:40:34.617738  550743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:40:34.646308  550743 api_server.go:72] duration metric: took 11.238014636s to wait for apiserver process to appear ...
	I1210 08:40:34.646335  550743 api_server.go:88] waiting for apiserver healthz status ...
	I1210 08:40:34.646354  550743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:40:34.738851  550743 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 08:40:34.738885  550743 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
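	These 500s are transient: the apiserver is already answering, but reports the rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and bootstrap-controller poststarthooks as unfinished, so minikube keeps polling until every check turns [+]. The same per-check breakdown can be reproduced by hand; a sketch, assuming kubeconfig already points at this cluster:
	    # Same verbose detail as the logged response above
	    kubectl get --raw '/healthz?verbose'
	    # Or query the endpoint directly, skipping certificate verification
	    curl -sk 'https://192.168.76.2:8443/healthz?verbose'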
	I1210 08:40:35.147100  550743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:40:35.165835  550743 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 08:40:35.165928  550743 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 08:40:35.646486  550743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:40:35.654761  550743 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 08:40:35.654839  550743 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 08:40:35.353912  559515 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 08:40:35.353936  559515 machine.go:97] duration metric: took 4.466461328s to provisionDockerMachine
	I1210 08:40:35.353947  559515 start.go:293] postStartSetup for "kubernetes-upgrade-470056" (driver="docker")
	I1210 08:40:35.353983  559515 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 08:40:35.354100  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 08:40:35.354167  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.374974  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.493386  559515 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 08:40:35.497233  559515 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1210 08:40:35.497304  559515 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1210 08:40:35.497336  559515 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/addons for local assets ...
	I1210 08:40:35.497408  559515 filesync.go:126] Scanning /home/jenkins/minikube-integration/22089-376671/.minikube/files for local assets ...
	I1210 08:40:35.497512  559515 filesync.go:149] local asset: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem -> 3785282.pem in /etc/ssl/certs
	I1210 08:40:35.497643  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 08:40:35.510121  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 08:40:35.541645  559515 start.go:296] duration metric: took 187.683107ms for postStartSetup
	I1210 08:40:35.541805  559515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:40:35.541878  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.569359  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.679440  559515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1210 08:40:35.687359  559515 fix.go:56] duration metric: took 5.276695711s for fixHost
	I1210 08:40:35.687391  559515 start.go:83] releasing machines lock for "kubernetes-upgrade-470056", held for 5.276755272s
	I1210 08:40:35.687546  559515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-470056
	I1210 08:40:35.715293  559515 ssh_runner.go:195] Run: cat /version.json
	I1210 08:40:35.715345  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.715591  559515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 08:40:35.715643  559515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-470056
	I1210 08:40:35.747155  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.760447  559515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/kubernetes-upgrade-470056/id_rsa Username:docker}
	I1210 08:40:35.875652  559515 ssh_runner.go:195] Run: systemctl --version
	I1210 08:40:35.992980  559515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 08:40:36.060420  559515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 08:40:36.068629  559515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 08:40:36.068701  559515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 08:40:36.080402  559515 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 08:40:36.080426  559515 start.go:496] detecting cgroup driver to use...
	I1210 08:40:36.080457  559515 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1210 08:40:36.080505  559515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 08:40:36.097918  559515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 08:40:36.113386  559515 docker.go:218] disabling cri-docker service (if available) ...
	I1210 08:40:36.113506  559515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 08:40:36.140941  559515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 08:40:36.164336  559515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 08:40:36.341219  559515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 08:40:36.510314  559515 docker.go:234] disabling docker service ...
	I1210 08:40:36.510436  559515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 08:40:36.544014  559515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 08:40:36.565058  559515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 08:40:36.755122  559515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 08:40:36.954246  559515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 08:40:36.973758  559515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 08:40:36.998372  559515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1210 08:40:36.998450  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.011466  559515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 08:40:37.011569  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.024093  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.038124  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.050180  559515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 08:40:37.059304  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.069187  559515 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.078300  559515 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 08:40:37.088529  559515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 08:40:37.097305  559515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 08:40:37.105542  559515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:40:37.219171  559515 ssh_runner.go:195] Run: sudo systemctl restart crio
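	The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged low ports via default_sysctls before CRI-O is restarted. A sketch of inspecting the resulting drop-in on the node:
	    # Confirm the values the provisioner just rewrote
	    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl is-active crio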
	I1210 08:40:37.383269  559515 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 08:40:37.383392  559515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 08:40:37.387716  559515 start.go:564] Will wait 60s for crictl version
	I1210 08:40:37.387825  559515 ssh_runner.go:195] Run: which crictl
	I1210 08:40:37.392809  559515 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1210 08:40:37.427720  559515 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1210 08:40:37.427849  559515 ssh_runner.go:195] Run: crio --version
	I1210 08:40:37.463991  559515 ssh_runner.go:195] Run: crio --version
	I1210 08:40:37.501980  559515 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1210 08:40:37.505579  559515 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-470056 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1210 08:40:37.527286  559515 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1210 08:40:37.531117  559515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 08:40:37.540886  559515 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 08:40:37.541005  559515 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1210 08:40:37.541068  559515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:40:37.575189  559515 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1210 08:40:37.575294  559515 ssh_runner.go:195] Run: which lz4
	I1210 08:40:37.579111  559515 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 08:40:37.583005  559515 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 08:40:37.583068  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1210 08:40:39.230262  559515 crio.go:462] duration metric: took 1.651193004s to copy over tarball
	I1210 08:40:39.230351  559515 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
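	The preload is an lz4-compressed tarball of the container-image store, extracted directly into /var so CRI-O finds the v1.35.0-beta.0 images without pulling them. Its contents can be listed without unpacking; a sketch, assuming lz4 is installed on the node:
	    # Peek at the preload before (or instead of) extracting it
	    sudo tar --use-compress-program=lz4 -tf /preloaded.tar.lz4 | head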
	I1210 08:40:36.147310  550743 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1210 08:40:36.178199  550743 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1210 08:40:36.184682  550743 api_server.go:141] control plane version: v1.34.2
	I1210 08:40:36.184709  550743 api_server.go:131] duration metric: took 1.538367022s to wait for apiserver health ...
	I1210 08:40:36.184717  550743 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 08:40:36.197515  550743 system_pods.go:59] 7 kube-system pods found
	I1210 08:40:36.197552  550743 system_pods.go:61] "coredns-66bc5c9577-7r54s" [621f03c5-af3a-4a79-9ee4-b28c576f8a3a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:40:36.197560  550743 system_pods.go:61] "etcd-pause-767596" [a7bd6225-3434-45f5-93b9-e831b00e11ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 08:40:36.197566  550743 system_pods.go:61] "kindnet-kx2vt" [475678b0-6300-4037-8f4c-dbc6a7b12cb7] Running
	I1210 08:40:36.197572  550743 system_pods.go:61] "kube-apiserver-pause-767596" [16b63525-5339-4b20-845a-d458efe96c7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 08:40:36.197578  550743 system_pods.go:61] "kube-controller-manager-pause-767596" [c2cc657a-37f3-4a66-82a7-eba45b2fcbfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 08:40:36.197582  550743 system_pods.go:61] "kube-proxy-p4g2s" [6a1da39c-5fd9-47dc-8874-9dc207729443] Running
	I1210 08:40:36.197587  550743 system_pods.go:61] "kube-scheduler-pause-767596" [4f7090fa-0901-487a-bd5e-b4b358773fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 08:40:36.197593  550743 system_pods.go:74] duration metric: took 12.870088ms to wait for pod list to return data ...
	I1210 08:40:36.197601  550743 default_sa.go:34] waiting for default service account to be created ...
	I1210 08:40:36.204915  550743 default_sa.go:45] found service account: "default"
	I1210 08:40:36.204937  550743 default_sa.go:55] duration metric: took 7.330159ms for default service account to be created ...
	I1210 08:40:36.204947  550743 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 08:40:36.294173  550743 system_pods.go:86] 7 kube-system pods found
	I1210 08:40:36.294268  550743 system_pods.go:89] "coredns-66bc5c9577-7r54s" [621f03c5-af3a-4a79-9ee4-b28c576f8a3a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 08:40:36.294292  550743 system_pods.go:89] "etcd-pause-767596" [a7bd6225-3434-45f5-93b9-e831b00e11ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 08:40:36.294333  550743 system_pods.go:89] "kindnet-kx2vt" [475678b0-6300-4037-8f4c-dbc6a7b12cb7] Running
	I1210 08:40:36.294359  550743 system_pods.go:89] "kube-apiserver-pause-767596" [16b63525-5339-4b20-845a-d458efe96c7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 08:40:36.294382  550743 system_pods.go:89] "kube-controller-manager-pause-767596" [c2cc657a-37f3-4a66-82a7-eba45b2fcbfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 08:40:36.294417  550743 system_pods.go:89] "kube-proxy-p4g2s" [6a1da39c-5fd9-47dc-8874-9dc207729443] Running
	I1210 08:40:36.294441  550743 system_pods.go:89] "kube-scheduler-pause-767596" [4f7090fa-0901-487a-bd5e-b4b358773fa0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 08:40:36.294463  550743 system_pods.go:126] duration metric: took 89.509985ms to wait for k8s-apps to be running ...
	I1210 08:40:36.294500  550743 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 08:40:36.294587  550743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:40:36.323019  550743 system_svc.go:56] duration metric: took 28.50256ms WaitForService to wait for kubelet
	I1210 08:40:36.323095  550743 kubeadm.go:587] duration metric: took 12.914806514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 08:40:36.323132  550743 node_conditions.go:102] verifying NodePressure condition ...
	I1210 08:40:36.331911  550743 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1210 08:40:36.331989  550743 node_conditions.go:123] node cpu capacity is 2
	I1210 08:40:36.332041  550743 node_conditions.go:105] duration metric: took 8.885375ms to run NodePressure ...
	I1210 08:40:36.332072  550743 start.go:242] waiting for startup goroutines ...
	I1210 08:40:36.332094  550743 start.go:247] waiting for cluster config update ...
	I1210 08:40:36.332126  550743 start.go:256] writing updated cluster config ...
	I1210 08:40:36.332479  550743 ssh_runner.go:195] Run: rm -f paused
	I1210 08:40:36.337822  550743 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 08:40:36.338344  550743 kapi.go:59] client config for pause-767596: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/pause-767596/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/pause-767596/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 08:40:36.407839  550743 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7r54s" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:38.415041  550743 pod_ready.go:94] pod "coredns-66bc5c9577-7r54s" is "Ready"
	I1210 08:40:38.415070  550743 pod_ready.go:86] duration metric: took 2.007204238s for pod "coredns-66bc5c9577-7r54s" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:38.419883  550743 pod_ready.go:83] waiting for pod "etcd-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	W1210 08:40:40.426849  550743 pod_ready.go:104] pod "etcd-pause-767596" is not "Ready", error: <nil>
	W1210 08:40:42.949567  550743 pod_ready.go:104] pod "etcd-pause-767596" is not "Ready", error: <nil>
	I1210 08:40:43.427007  550743 pod_ready.go:94] pod "etcd-pause-767596" is "Ready"
	I1210 08:40:43.427044  550743 pod_ready.go:86] duration metric: took 5.007131635s for pod "etcd-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.429937  550743 pod_ready.go:83] waiting for pod "kube-apiserver-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.436011  550743 pod_ready.go:94] pod "kube-apiserver-pause-767596" is "Ready"
	I1210 08:40:43.436046  550743 pod_ready.go:86] duration metric: took 6.087964ms for pod "kube-apiserver-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.439451  550743 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.445121  550743 pod_ready.go:94] pod "kube-controller-manager-pause-767596" is "Ready"
	I1210 08:40:43.445145  550743 pod_ready.go:86] duration metric: took 5.664485ms for pod "kube-controller-manager-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.448754  550743 pod_ready.go:83] waiting for pod "kube-proxy-p4g2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.625336  550743 pod_ready.go:94] pod "kube-proxy-p4g2s" is "Ready"
	I1210 08:40:43.625382  550743 pod_ready.go:86] duration metric: took 176.567982ms for pod "kube-proxy-p4g2s" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:43.825217  550743 pod_ready.go:83] waiting for pod "kube-scheduler-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:44.224944  550743 pod_ready.go:94] pod "kube-scheduler-pause-767596" is "Ready"
	I1210 08:40:44.224983  550743 pod_ready.go:86] duration metric: took 399.736317ms for pod "kube-scheduler-pause-767596" in "kube-system" namespace to be "Ready" or be gone ...
	I1210 08:40:44.224997  550743 pod_ready.go:40] duration metric: took 7.887141637s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1210 08:40:44.309595  550743 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1210 08:40:44.315354  550743 out.go:179] * Done! kubectl is now configured to use "pause-767596" cluster and "default" namespace by default
	I1210 08:40:41.539329  559515 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.308951206s)
	I1210 08:40:41.539353  559515 crio.go:469] duration metric: took 2.309048692s to extract the tarball
	I1210 08:40:41.539360  559515 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 08:40:41.680093  559515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 08:40:41.716512  559515 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 08:40:41.716537  559515 cache_images.go:86] Images are preloaded, skipping loading
	I1210 08:40:41.716551  559515 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0-beta.0 crio true true} ...
	I1210 08:40:41.716645  559515 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-470056 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
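	The empty ExecStart= line followed by a second ExecStart= in the unit above is the standard systemd drop-in idiom for replacing, rather than appending to, a unit's command line; here it points kubelet at the versioned binary under /var/lib/minikube/binaries and at the node's config. A sketch of reviewing the merged unit on the node:
	    # Show the base unit plus every drop-in, including this override
	    systemctl cat kubelet
	    sudo systemctl daemon-reload && systemctl status kubelet --no-pager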
	I1210 08:40:41.716748  559515 ssh_runner.go:195] Run: crio config
	I1210 08:40:41.794006  559515 cni.go:84] Creating CNI manager for ""
	I1210 08:40:41.794030  559515 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 08:40:41.794054  559515 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1210 08:40:41.794078  559515 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-470056 NodeName:kubernetes-upgrade-470056 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 08:40:41.794211  559515 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-470056"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
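	A generated file like the one above can be sanity-checked before kubeadm consumes it; a hedged sketch using the version-matched binary that minikube stages on the node (kubeadm config validate is available in recent kubeadm releases):
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml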
	
	I1210 08:40:41.794290  559515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1210 08:40:41.803239  559515 binaries.go:51] Found k8s binaries, skipping transfer
	I1210 08:40:41.803363  559515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 08:40:41.810785  559515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1210 08:40:41.824511  559515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1210 08:40:41.845081  559515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1210 08:40:41.871774  559515 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1210 08:40:41.875920  559515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
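	The one-liner above is an idempotent /etc/hosts update: strip any existing control-plane.minikube.internal entry, append the fresh mapping, and copy the result back in one step. The same commands, unrolled for readability:
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo "192.168.85.2	control-plane.minikube.internal"
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts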
	I1210 08:40:41.888908  559515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 08:40:42.014653  559515 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 08:40:42.034732  559515 certs.go:69] Setting up /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056 for IP: 192.168.85.2
	I1210 08:40:42.034805  559515 certs.go:195] generating shared ca certs ...
	I1210 08:40:42.034838  559515 certs.go:227] acquiring lock for ca certs: {Name:mk8f0c12407c084bd20b81428ac0dabca9a8cbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:40:42.035067  559515 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key
	I1210 08:40:42.035169  559515 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key
	I1210 08:40:42.035207  559515 certs.go:257] generating profile certs ...
	I1210 08:40:42.035342  559515 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/client.key
	I1210 08:40:42.035478  559515 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/apiserver.key.45c47546
	I1210 08:40:42.035578  559515 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/proxy-client.key
	I1210 08:40:42.035748  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem (1338 bytes)
	W1210 08:40:42.035825  559515 certs.go:480] ignoring /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528_empty.pem, impossibly tiny 0 bytes
	I1210 08:40:42.035863  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 08:40:42.035926  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/ca.pem (1082 bytes)
	I1210 08:40:42.035986  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/cert.pem (1123 bytes)
	I1210 08:40:42.036043  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/certs/key.pem (1675 bytes)
	I1210 08:40:42.036135  559515 certs.go:484] found cert: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem (1708 bytes)
	I1210 08:40:42.036922  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 08:40:42.079780  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 08:40:42.128421  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 08:40:42.179894  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 08:40:42.203313  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 08:40:42.226109  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 08:40:42.248667  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 08:40:42.271268  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 08:40:42.292926  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/certs/378528.pem --> /usr/share/ca-certificates/378528.pem (1338 bytes)
	I1210 08:40:42.313701  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/ssl/certs/3785282.pem --> /usr/share/ca-certificates/3785282.pem (1708 bytes)
	I1210 08:40:42.334504  559515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 08:40:42.354555  559515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 08:40:42.368724  559515 ssh_runner.go:195] Run: openssl version
	I1210 08:40:42.377006  559515 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.385181  559515 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3785282.pem /etc/ssl/certs/3785282.pem
	I1210 08:40:42.394792  559515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.398655  559515 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 10 07:35 /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.398722  559515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3785282.pem
	I1210 08:40:42.439750  559515 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1210 08:40:42.447349  559515 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.454987  559515 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1210 08:40:42.464323  559515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.468227  559515 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 10 07:25 /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.468337  559515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 08:40:42.511301  559515 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1210 08:40:42.519128  559515 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.527862  559515 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/378528.pem /etc/ssl/certs/378528.pem
	I1210 08:40:42.535897  559515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.539729  559515 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 10 07:35 /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.539837  559515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378528.pem
	I1210 08:40:42.580647  559515 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
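	The symlink names being tested above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: TLS verification looks a CA up in /etc/ssl/certs by the hash of its subject. The relationship, as a sketch:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ls -l "/etc/ssl/certs/${h}.0"   # hash symlink, normally maintained by update-ca-certificates/c_rehash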
	I1210 08:40:42.588135  559515 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 08:40:42.591981  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 08:40:42.634050  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 08:40:42.676821  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 08:40:42.719106  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 08:40:42.761009  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 08:40:42.802912  559515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
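	The -checkend 86400 runs above make openssl exit non-zero if a certificate expires within the next 86400 seconds (24 hours), which is how the keep/regenerate decision is made here. For example:
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"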
	I1210 08:40:42.845199  559515 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-470056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-470056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:40:42.845325  559515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 08:40:42.845407  559515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 08:40:42.919731  559515 cri.go:89] found id: ""
	I1210 08:40:42.919869  559515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 08:40:42.939688  559515 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1210 08:40:42.939759  559515 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1210 08:40:42.939836  559515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 08:40:42.951898  559515 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 08:40:42.952491  559515 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-470056" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:40:42.952776  559515 kubeconfig.go:62] /home/jenkins/minikube-integration/22089-376671/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-470056" cluster setting kubeconfig missing "kubernetes-upgrade-470056" context setting]
	I1210 08:40:42.953270  559515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/kubeconfig: {Name:mk62f1b4a63164643ed31a5211abdadfc49db863 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 08:40:42.953988  559515 kapi.go:59] client config for kubernetes-upgrade-470056: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/client.crt", KeyFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/profiles/kubernetes-upgrade-470056/client.key", CAFile:"/home/jenkins/minikube-integration/22089-376671/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 08:40:42.954742  559515 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 08:40:42.954787  559515 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1210 08:40:42.954808  559515 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1210 08:40:42.954828  559515 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 08:40:42.954858  559515 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1210 08:40:42.955197  559515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 08:40:42.969776  559515 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-10 08:40:09.943357327 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-10 08:40:41.863683282 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-470056"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
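	The drift shown is the kubeadm API bump from v1beta3 to v1beta4, in which extraArgs and kubeletExtraArgs change from string maps to name/value lists (and the old etcd proxy-refresh-interval extra arg is dropped). An old config can be converted the same way by hand; a sketch:
	    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-v1beta4.yaml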
	I1210 08:40:42.969838  559515 kubeadm.go:1161] stopping kube-system containers ...
	I1210 08:40:42.969864  559515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 08:40:42.969937  559515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 08:40:43.022682  559515 cri.go:89] found id: ""
	I1210 08:40:43.022839  559515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 08:40:43.042594  559515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 08:40:43.051165  559515 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 10 08:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 10 08:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 10 08:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 10 08:40 /etc/kubernetes/scheduler.conf
	
	I1210 08:40:43.051260  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 08:40:43.060495  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 08:40:43.070827  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 08:40:43.079515  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 08:40:43.079584  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 08:40:43.087808  559515 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 08:40:43.096558  559515 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 08:40:43.096678  559515 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 08:40:43.104568  559515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 08:40:43.116430  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:43.166741  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:44.689612  559515 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.52278604s)
	I1210 08:40:44.689688  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 08:40:44.978902  559515 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
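	Rather than a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane) so existing cluster state is preserved. The full phase list can be inspected directly:
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init phase --help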
	
	
	==> CRI-O <==
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.939293031Z" level=info msg="Started container" PID=2483 containerID=d6c7b4fc3031146f4408f3b9fc1855ea7b8c17881d51899d60b47dc3ac1095ad description=kube-system/kube-scheduler-pause-767596/kube-scheduler id=f9cc9a1f-e031-434b-ad19-6e575ebd181d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b6c36ecf49986e23063f8c103db8c6814c9c467a7e907e2834335b15df918bc
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.942821502Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.957860613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.965599351Z" level=info msg="Started container" PID=2484 containerID=195fe55f5ac4a49a5f8ab6ad8e4d47c8eda98becee1742359aeff6075452e689 description=kube-system/etcd-pause-767596/etcd id=95a92f52-377a-4431-87e9-eec688bef20e name=/runtime.v1.RuntimeService/StartContainer sandboxID=46f5a3c85aab673bd2daf37e5551a3c4b0ee089425e162224498414a9bb30f90
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.996975425Z" level=info msg="Created container 0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca: kube-system/kube-apiserver-pause-767596/kube-apiserver" id=925cbd64-0622-4b32-bf91-f6e8dfe8d145 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 08:40:27 pause-767596 crio[2144]: time="2025-12-10T08:40:27.997985839Z" level=info msg="Starting container: 0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca" id=f7d5d787-74ec-4aef-9d13-795b48519abf name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 08:40:28 pause-767596 crio[2144]: time="2025-12-10T08:40:28.002255791Z" level=info msg="Started container" PID=2513 containerID=0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca description=kube-system/kube-apiserver-pause-767596/kube-apiserver id=f7d5d787-74ec-4aef-9d13-795b48519abf name=/runtime.v1.RuntimeService/StartContainer sandboxID=66c53b93e01ff43370b2463b5056791d03dc6e17cde52b70f3ef0e7d5c07020e
	Dec 10 08:40:28 pause-767596 crio[2144]: time="2025-12-10T08:40:28.304356312Z" level=info msg="Created container 23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f: kube-system/kube-proxy-p4g2s/kube-proxy" id=80dc2755-f8e5-4410-ae87-8348b670a07e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 10 08:40:28 pause-767596 crio[2144]: time="2025-12-10T08:40:28.305033808Z" level=info msg="Starting container: 23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f" id=c8294f6d-e153-483b-a747-a7ebcc411dc5 name=/runtime.v1.RuntimeService/StartContainer
	Dec 10 08:40:28 pause-767596 crio[2144]: time="2025-12-10T08:40:28.307925932Z" level=info msg="Started container" PID=2512 containerID=23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f description=kube-system/kube-proxy-p4g2s/kube-proxy id=c8294f6d-e153-483b-a747-a7ebcc411dc5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=11eb6a592222a33bac549e35894003b03b6ab247b291c71ec72b605201c2b7a1
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.532741716Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.543123386Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.543159021Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.543176391Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.546877278Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.547063578Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.547141806Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.551159234Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.551196814Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.551219468Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.554429799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.554465032Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.55448785Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.558483838Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 10 08:40:43 pause-767596 crio[2144]: time="2025-12-10T08:40:43.558522821Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	0cbf424c207d4       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   21 seconds ago       Running             kube-apiserver            1                   66c53b93e01ff       kube-apiserver-pause-767596            kube-system
	23d088000dc21       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   11eb6a592222a       kube-proxy-p4g2s                       kube-system
	195fe55f5ac4a       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   21 seconds ago       Running             etcd                      1                   46f5a3c85aab6       etcd-pause-767596                      kube-system
	d6c7b4fc30311       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   21 seconds ago       Running             kube-scheduler            1                   8b6c36ecf4998       kube-scheduler-pause-767596            kube-system
	d7674cb30a043       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   21 seconds ago       Running             kube-controller-manager   1                   640bc27225613       kube-controller-manager-pause-767596   kube-system
	3041c7738207a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago       Running             coredns                   2                   d49ccd7fa1884       coredns-66bc5c9577-7r54s               kube-system
	4d1d5992cc4a7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   26 seconds ago       Running             kindnet-cni               2                   ca68b9d4dabd6       kindnet-kx2vt                          kube-system
	a6d2f0e2f3d9d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Created             coredns                   1                   d49ccd7fa1884       coredns-66bc5c9577-7r54s               kube-system
	794b94a9f483a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Created             kindnet-cni               1                   ca68b9d4dabd6       kindnet-kx2vt                          kube-system
	2e2fa03d36412       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago        Exited              coredns                   0                   d49ccd7fa1884       coredns-66bc5c9577-7r54s               kube-system
	022741f8c7a58       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   2 minutes ago        Exited              kube-proxy                0                   11eb6a592222a       kube-proxy-p4g2s                       kube-system
	b9c23cab83e39       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago        Exited              kindnet-cni               0                   ca68b9d4dabd6       kindnet-kx2vt                          kube-system
	c1c1db525792c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   3 minutes ago        Exited              etcd                      0                   46f5a3c85aab6       etcd-pause-767596                      kube-system
	d799933dbf1f7       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   3 minutes ago        Exited              kube-apiserver            0                   66c53b93e01ff       kube-apiserver-pause-767596            kube-system
	81209469877d5       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   3 minutes ago        Exited              kube-scheduler            0                   8b6c36ecf4998       kube-scheduler-pause-767596            kube-system
	f04f090d9d5b7       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   3 minutes ago        Exited              kube-controller-manager   0                   640bc27225613       kube-controller-manager-pause-767596   kube-system
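	A table like the one above comes from the CRI client on the node; against CRI-O it can be reproduced with (profile name taken from this log):
	    minikube ssh -p pause-767596 -- sudo crictl ps -a -o table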
	
	
	==> coredns [2e2fa03d36412a2c91c2f76d767f6e2e9e591aa5b021bfb00f6f513ec0bfd307] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58816 - 33210 "HINFO IN 8469837959569984790.7529404179972013098. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041734054s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3041c7738207a5c5b2d647d810aafb92c70d64ab842ac4e8145635706f896e61] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46857 - 10465 "HINFO IN 6515730640656355756.8113294786332208993. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015096539s
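	The connection-refused errors in this block all target 10.96.0.1:443, the ClusterIP of the default/kubernetes Service that fronts the API server, so they record CoreDNS retrying while the control plane restarted rather than a DNS fault. The VIP can be confirmed with:
	    kubectl -n default get svc kubernetes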
	
	
	==> coredns [a6d2f0e2f3d9d16f6e4e55ca9f54961f4e0d50dcc4d20da2b0505e4622433a37] <==
	
	
	==> describe nodes <==
	Name:               pause-767596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-767596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=edc6abd3c0573b88c7a02dc35aa0b985627fa3e9
	                    minikube.k8s.io/name=pause-767596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_10T08_37_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 10 Dec 2025 08:37:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-767596
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 10 Dec 2025 08:40:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 10 Dec 2025 08:38:37 +0000   Wed, 10 Dec 2025 08:37:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 10 Dec 2025 08:38:37 +0000   Wed, 10 Dec 2025 08:37:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 10 Dec 2025 08:38:37 +0000   Wed, 10 Dec 2025 08:37:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 10 Dec 2025 08:38:37 +0000   Wed, 10 Dec 2025 08:38:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-767596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bfdf75342fda7ce4dcc05536938a4f8
	  System UUID:                fd128a2e-8483-4dc7-b52d-1cda387dfee2
	  Boot ID:                    9ae06026-ffc7-4eb4-912b-d54adcad0f66
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7r54s                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m54s
	  kube-system                 etcd-pause-767596                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m
	  kube-system                 kindnet-kx2vt                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m54s
	  kube-system                 kube-apiserver-pause-767596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-controller-manager-pause-767596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-p4g2s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-scheduler-pause-767596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m52s                  kube-proxy       
	  Normal   Starting                 13s                    kube-proxy       
	  Warning  CgroupV1                 3m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m12s (x8 over 3m12s)  kubelet          Node pause-767596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m12s (x8 over 3m12s)  kubelet          Node pause-767596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m12s (x8 over 3m12s)  kubelet          Node pause-767596 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m                     kubelet          Node pause-767596 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 3m                     kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    3m                     kubelet          Node pause-767596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m                     kubelet          Node pause-767596 status is now: NodeHasSufficientPID
	  Normal   Starting                 3m                     kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m55s                  node-controller  Node pause-767596 event: Registered Node pause-767596 in Controller
	  Normal   NodeReady                2m13s                  kubelet          Node pause-767596 status is now: NodeReady
	  Warning  ContainerGCFailed        60s                    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             25s (x7 over 87s)      kubelet          Node pause-767596 status is now: NodeNotReady
	  Normal   RegisteredNode           13s                    node-controller  Node pause-767596 event: Registered Node pause-767596 in Controller
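	The NodeNotReady and second RegisteredNode events match the pause/restart flow under test. While this happens, the node's Ready condition can be polled directly; a sketch:
	    kubectl get node pause-767596 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'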
	
	
	==> dmesg <==
	[  +3.809013] overlayfs: idmapped layers are currently not supported
	[Dec10 08:14] overlayfs: idmapped layers are currently not supported
	[ +34.529466] overlayfs: idmapped layers are currently not supported
	[Dec10 08:15] overlayfs: idmapped layers are currently not supported
	[  +3.847763] overlayfs: idmapped layers are currently not supported
	[Dec10 08:17] overlayfs: idmapped layers are currently not supported
	[Dec10 08:18] overlayfs: idmapped layers are currently not supported
	[Dec10 08:20] overlayfs: idmapped layers are currently not supported
	[Dec10 08:24] overlayfs: idmapped layers are currently not supported
	[Dec10 08:25] overlayfs: idmapped layers are currently not supported
	[Dec10 08:26] overlayfs: idmapped layers are currently not supported
	[Dec10 08:27] overlayfs: idmapped layers are currently not supported
	[Dec10 08:28] overlayfs: idmapped layers are currently not supported
	[Dec10 08:30] overlayfs: idmapped layers are currently not supported
	[ +17.507086] overlayfs: idmapped layers are currently not supported
	[Dec10 08:31] overlayfs: idmapped layers are currently not supported
	[ +48.274286] overlayfs: idmapped layers are currently not supported
	[Dec10 08:32] overlayfs: idmapped layers are currently not supported
	[ +48.206918] overlayfs: idmapped layers are currently not supported
	[Dec10 08:33] overlayfs: idmapped layers are currently not supported
	[Dec10 08:34] overlayfs: idmapped layers are currently not supported
	[Dec10 08:35] overlayfs: idmapped layers are currently not supported
	[Dec10 08:37] overlayfs: idmapped layers are currently not supported
	[  +1.400670] overlayfs: idmapped layers are currently not supported
	[Dec10 08:40] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [195fe55f5ac4a49a5f8ab6ad8e4d47c8eda98becee1742359aeff6075452e689] <==
	{"level":"warn","ts":"2025-12-10T08:40:32.572044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.606857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.636293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.676994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.717059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.805528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.838610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.855479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.901389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.916815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.954352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.975348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:32.991722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.034153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.059993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.070494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.082695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.105372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.138837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.159410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.200263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.233848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.268352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.287087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:40:33.344876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41712","server-name":"","error":"EOF"}
	
	
	==> etcd [c1c1db525792cee93534a1a2d8bc9e289bb8e44a3c18c1a2962887ee83192854] <==
	{"level":"warn","ts":"2025-12-10T08:37:45.289358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.326521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.348167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.405790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.445716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.489219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-10T08:37:45.648462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35208","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-10T08:38:43.032843Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-10T08:38:43.032898Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-767596","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-12-10T08:38:43.033023Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T08:38:43.037765Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-10T08:38:44.069384Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T08:38:44.069617Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T08:38:44.069795Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T08:38:44.069843Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-10T08:38:44.069692Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-10T08:38:44.069923Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-10T08:38:44.069973Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T08:38:44.069707Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-12-10T08:38:44.070068Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-10T08:38:44.070102Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-10T08:38:44.078624Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-12-10T08:38:44.078785Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-10T08:38:44.078858Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-12-10T08:38:44.078898Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-767596","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
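The etcd log above records a clean shutdown rather than a crash: "skipped leadership transfer for single voting member cluster" is the expected path when there is only one voting member, and the "use of closed network connection" errors are the listeners being torn down after "closing etcd server". A minimal sketch of probing the same endpoint with the official Go client, using the address from the log; note the real cluster serves TLS with minikube-managed certs, which this sketch omits for brevity:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		clientv3 "go.etcd.io/etcd/client/v3"
	)
	
	func main() {
		// Endpoint taken from advertise-client-urls in the log above. A probe
		// against this cluster would also need the minikube client certs in
		// clientv3.Config.TLS; they are omitted here.
		cli, err := clientv3.New(clientv3.Config{
			Endpoints:   []string{"https://192.168.76.2:2379"},
			DialTimeout: 2 * time.Second,
		})
		if err != nil {
			fmt.Println("client setup failed:", err)
			return
		}
		defer cli.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		// Status succeeds only while the endpoint is serving; after the
		// shutdown above it returns a connection error instead.
		resp, err := cli.Status(ctx, "https://192.168.76.2:2379")
		if err != nil {
			fmt.Println("endpoint down:", err)
			return
		}
		fmt.Printf("etcd serving, member %x, version %s\n", resp.Header.MemberId, resp.Version)
	}
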
	==> kernel <==
	 08:40:50 up  3:23,  0 user,  load average: 3.02, 2.34, 1.88
	Linux pause-767596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d1d5992cc4a7b009e9355692324b93c4073f0c07db670f0d6fb5415eb94e688] <==
	I1210 08:40:23.304467       1 main.go:148] setting mtu 1500 for CNI 
	I1210 08:40:23.304479       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 08:40:23.304492       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T08:40:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 08:40:23.532747       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 08:40:23.533063       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 08:40:23.533119       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 08:40:23.534075       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 08:40:23.534358       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 08:40:23.537567       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 08:40:23.537734       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 08:40:23.537882       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 08:40:24.511110       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 08:40:24.598240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 08:40:24.935954       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 08:40:24.966764       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 08:40:26.155280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1210 08:40:26.659535       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 08:40:26.674234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 08:40:27.114735       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1210 08:40:34.735308       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 08:40:34.735418       1 metrics.go:72] Registering metrics
	I1210 08:40:34.735541       1 controller.go:711] "Syncing nftables rules"
	I1210 08:40:43.532335       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 08:40:43.532439       1 main.go:301] handling current node
	
	
	==> kindnet [794b94a9f483ad2f1228b1f4bcff462c3ed700eb0451dc2e54464daa7fb66455] <==
	
	
	==> kindnet [b9c23cab83e3933a35721d41b4a1c9d16c345dc93c9e4f506985082ad0b659b0] <==
	I1210 08:37:56.904023       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1210 08:37:56.996317       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1210 08:37:56.996464       1 main.go:148] setting mtu 1500 for CNI 
	I1210 08:37:56.996477       1 main.go:178] kindnetd IP family: "ipv4"
	I1210 08:37:56.996491       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-10T08:37:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1210 08:37:57.198735       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1210 08:37:57.201196       1 controller.go:381] "Waiting for informer caches to sync"
	I1210 08:37:57.201254       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1210 08:37:57.201773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1210 08:38:27.199572       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1210 08:38:27.202044       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1210 08:38:27.202227       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1210 08:38:27.203457       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1210 08:38:28.601692       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1210 08:38:28.601793       1 metrics.go:72] Registering metrics
	I1210 08:38:28.601878       1 controller.go:711] "Syncing nftables rules"
	I1210 08:38:37.198532       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1210 08:38:37.198659       1 main.go:301] handling current node
	
	
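Both kindnet logs show the same startup pattern: client-go reflectors fail their initial List calls against the service VIP 10.96.0.1:443 — "connection refused" while the apiserver process is restarting (08:40), "i/o timeout" while it is unreachable (08:38) — and "Caches are synced" appears once a List finally succeeds. The reflector retries with backoff on its own; a caller that wants an explicit bound on that wait can poll readiness, as in this sketch (assumes in-cluster credentials; this is not code from kindnet itself):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // same credentials a pod like kindnet uses
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Poll until one List against the apiserver succeeds, or give up
		// after two minutes instead of retrying indefinitely.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{Limit: 1})
				if err != nil {
					fmt.Println("apiserver not ready yet:", err)
					return false, nil // keep retrying, mirroring the reflector's behavior
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver reachable")
	}
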
	==> kube-apiserver [0cbf424c207d46886a3438d2ac328e0162fc287f9259049da31f809885fae9ca] <==
	I1210 08:40:34.631168       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1210 08:40:34.631229       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1210 08:40:34.634686       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1210 08:40:34.634702       1 policy_source.go:240] refreshing policies
	I1210 08:40:34.656353       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 08:40:34.658530       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 08:40:34.669587       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 08:40:34.669676       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 08:40:34.669957       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1210 08:40:34.670077       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 08:40:34.670999       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1210 08:40:34.675161       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1210 08:40:34.675529       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1210 08:40:34.675652       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1210 08:40:34.676334       1 aggregator.go:171] initial CRD sync complete...
	I1210 08:40:34.676387       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 08:40:34.676417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 08:40:34.676444       1 cache.go:39] Caches are synced for autoregister controller
	E1210 08:40:34.747351       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 08:40:35.185902       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 08:40:36.512781       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1210 08:40:37.965731       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 08:40:38.133362       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1210 08:40:38.182287       1 controller.go:667] quota admission added evaluator for: endpoints
	I1210 08:40:38.259165       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [d799933dbf1f73062115abbf6e2ba939f1de6386c5fd7cc217bae3a3989ffcf5] <==
	W1210 08:38:43.058121       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058170       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058219       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058270       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058322       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058378       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058430       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058482       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058533       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058586       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058639       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058690       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058750       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058803       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058854       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.058909       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060136       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060199       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060249       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060299       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060348       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060394       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060445       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060493       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 08:38:43.060794       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
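Every warning in this block is one event fanned out across the apiserver's per-resource gRPC channels: after etcd closed its listeners (see the etcd log above), each subchannel's reconnect to 127.0.0.1:2379 is refused. "connection refused" means the port is closed outright, unlike the kindnet i/o timeouts where packets were silently dropped; the difference is visible even in a bare TCP probe (a sketch; the address comes from the log):

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Same address the apiserver's etcd client is dialing in the log above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
		if err != nil {
			// "connection refused": the port is closed (etcd stopped).
			// "i/o timeout": packets are being dropped (host or path down).
			fmt.Println("etcd client port not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 2379")
	}
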
	==> kube-controller-manager [d7674cb30a04349c5eac365914ffd12f57c54b30f34067fb8aaca29e1c1f8db6] <==
	I1210 08:40:37.906458       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 08:40:37.906503       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1210 08:40:37.915004       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 08:40:37.915180       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1210 08:40:37.916709       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1210 08:40:37.917086       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1210 08:40:37.925596       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 08:40:37.925638       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 08:40:37.925761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1210 08:40:37.925838       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1210 08:40:37.925927       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1210 08:40:37.925956       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 08:40:37.931117       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1210 08:40:37.931217       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 08:40:37.931984       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1210 08:40:37.945501       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 08:40:37.946148       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1210 08:40:37.951388       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 08:40:37.951515       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 08:40:37.951601       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-767596"
	I1210 08:40:37.951737       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1210 08:40:37.960181       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 08:40:37.997676       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 08:40:37.997726       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 08:40:37.997738       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [f04f090d9d5b769dca360fb021e4a3baab548c08d62a34df27003779c331b1d6] <==
	I1210 08:37:55.054147       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1210 08:37:55.054153       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1210 08:37:55.054670       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1210 08:37:55.054702       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1210 08:37:55.054755       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1210 08:37:55.055207       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1210 08:37:55.057209       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1210 08:37:55.057294       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1210 08:37:55.057612       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1210 08:37:55.057813       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1210 08:37:55.058310       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1210 08:37:55.077878       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-767596" podCIDRs=["10.244.0.0/24"]
	I1210 08:37:55.090029       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1210 08:37:55.090131       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1210 08:37:55.090161       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 08:37:55.092621       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1210 08:37:55.092802       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1210 08:37:55.093233       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-767596"
	I1210 08:37:55.094781       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1210 08:37:55.094899       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1210 08:37:55.095363       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1210 08:37:55.095426       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1210 08:37:55.096329       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1210 08:37:55.096411       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1210 08:38:40.103143       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [022741f8c7a581c5c492c4ae38c1775fb76122b7b80fa353571b73866aeabdcb] <==
	I1210 08:37:56.920456       1 server_linux.go:53] "Using iptables proxy"
	I1210 08:37:57.024650       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 08:37:57.130446       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 08:37:57.130486       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 08:37:57.130563       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 08:37:57.239706       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 08:37:57.239768       1 server_linux.go:132] "Using iptables Proxier"
	I1210 08:37:57.244050       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 08:37:57.244372       1 server.go:527] "Version info" version="v1.34.2"
	I1210 08:37:57.244396       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 08:37:57.245912       1 config.go:200] "Starting service config controller"
	I1210 08:37:57.245936       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 08:37:57.245955       1 config.go:106] "Starting endpoint slice config controller"
	I1210 08:37:57.245959       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 08:37:57.245979       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 08:37:57.245984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 08:37:57.246679       1 config.go:309] "Starting node config controller"
	I1210 08:37:57.246700       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 08:37:57.246707       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 08:37:57.346772       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 08:37:57.346966       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1210 08:37:57.347235       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [23d088000dc21df1eea635b974d19360a5e49f3f60043678d31c51ca0026677f] <==
	I1210 08:40:35.306584       1 server_linux.go:53] "Using iptables proxy"
	I1210 08:40:35.667232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1210 08:40:35.903097       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1210 08:40:35.903136       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1210 08:40:35.903201       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 08:40:36.126495       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1210 08:40:36.126556       1 server_linux.go:132] "Using iptables Proxier"
	I1210 08:40:36.133220       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 08:40:36.133514       1 server.go:527] "Version info" version="v1.34.2"
	I1210 08:40:36.133530       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 08:40:36.134951       1 config.go:200] "Starting service config controller"
	I1210 08:40:36.134964       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1210 08:40:36.151366       1 config.go:106] "Starting endpoint slice config controller"
	I1210 08:40:36.151396       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1210 08:40:36.151425       1 config.go:403] "Starting serviceCIDR config controller"
	I1210 08:40:36.151429       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1210 08:40:36.152121       1 config.go:309] "Starting node config controller"
	I1210 08:40:36.152128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1210 08:40:36.152134       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1210 08:40:36.238902       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1210 08:40:36.251557       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1210 08:40:36.251673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [81209469877d513d99fb101d1a10c2ce3527b803a1a2632345426450f209d461] <==
	I1210 08:37:48.043502       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1210 08:37:48.069814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1210 08:37:48.069984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1210 08:37:48.070483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1210 08:37:48.078358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1210 08:37:48.079050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1210 08:37:48.079285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1210 08:37:48.084361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1210 08:37:48.084994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1210 08:37:48.085413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1210 08:37:48.086815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1210 08:37:48.087490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1210 08:37:48.090642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1210 08:37:48.090925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1210 08:37:48.091105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1210 08:37:48.091211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1210 08:37:48.091450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1210 08:37:48.091570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1210 08:37:48.091717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1210 08:37:49.637310       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 08:38:43.031399       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1210 08:38:43.031422       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1210 08:38:43.031455       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1210 08:38:43.031456       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1210 08:38:43.031469       1 run.go:72] "command failed" err="finished without leader elect"
	
	
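The forbidden errors at 08:37:48 are a startup race, not a misconfiguration: the scheduler's informers begin listing before the apiserver's authorization data is warm, and the errors stop once it is (the client-ca informer syncs a second later). Whether the current identity may perform a verb can be asked of the apiserver directly with a SelfSubjectAccessReview, sketched here; the kubeconfig path is the conventional kubeadm location and is an assumption:

	package main
	
	import (
		"context"
		"fmt"
	
		authzv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Illustrative path; the scheduler's real kubeconfig lives on the
		// control-plane node.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/scheduler.conf")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Ask the apiserver: may the current identity list pods cluster-wide?
		sar := &authzv1.SelfSubjectAccessReview{
			Spec: authzv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authzv1.ResourceAttributes{
					Verb:     "list",
					Resource: "pods",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
			context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
	}
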
	==> kube-scheduler [d6c7b4fc3031146f4408f3b9fc1855ea7b8c17881d51899d60b47dc3ac1095ad] <==
	I1210 08:40:32.750709       1 serving.go:386] Generated self-signed cert in-memory
	I1210 08:40:36.797598       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1210 08:40:36.797631       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 08:40:36.815541       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1210 08:40:36.815648       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1210 08:40:36.815675       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1210 08:40:36.815707       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1210 08:40:36.824921       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 08:40:36.824958       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 08:40:36.824977       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 08:40:36.824984       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 08:40:36.916716       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1210 08:40:36.925085       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1210 08:40:36.925129       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.869006    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4g2s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6a1da39c-5fd9-47dc-8874-9dc207729443" pod="kube-system/kube-proxy-p4g2s"
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.869408    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-7r54s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="621f03c5-af3a-4a79-9ee4-b28c576f8a3a" pod="kube-system/coredns-66bc5c9577-7r54s"
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.869789    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b938f46528c3209faec4a21e53aef9ab" pod="kube-system/kube-apiserver-pause-767596"
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.871216    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9207bccb5f86e0457a4555234e6ce912" pod="kube-system/kube-controller-manager-pause-767596"
	Dec 10 08:40:27 pause-767596 kubelet[1328]: E1210 08:40:27.871623    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="44ca7fad4b77c4449373efd3a5c7f1c5" pod="kube-system/kube-scheduler-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.071930    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b938f46528c3209faec4a21e53aef9ab" pod="kube-system/kube-apiserver-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.073125    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9207bccb5f86e0457a4555234e6ce912" pod="kube-system/kube-controller-manager-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.073538    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="44ca7fad4b77c4449373efd3a5c7f1c5" pod="kube-system/kube-scheduler-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.074168    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9b51ec6fb7597bdd245e6184e27911f1" pod="kube-system/etcd-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.074387    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-kx2vt\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="475678b0-6300-4037-8f4c-dbc6a7b12cb7" pod="kube-system/kindnet-kx2vt"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.074537    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4g2s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6a1da39c-5fd9-47dc-8874-9dc207729443" pod="kube-system/kube-proxy-p4g2s"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.074693    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-7r54s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="621f03c5-af3a-4a79-9ee4-b28c576f8a3a" pod="kube-system/coredns-66bc5c9577-7r54s"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.077236    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9207bccb5f86e0457a4555234e6ce912" pod="kube-system/kube-controller-manager-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.077773    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="44ca7fad4b77c4449373efd3a5c7f1c5" pod="kube-system/kube-scheduler-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.078141    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="9b51ec6fb7597bdd245e6184e27911f1" pod="kube-system/etcd-pause-767596"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.078494    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-kx2vt\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="475678b0-6300-4037-8f4c-dbc6a7b12cb7" pod="kube-system/kindnet-kx2vt"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.078861    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4g2s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="6a1da39c-5fd9-47dc-8874-9dc207729443" pod="kube-system/kube-proxy-p4g2s"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.079589    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-7r54s\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="621f03c5-af3a-4a79-9ee4-b28c576f8a3a" pod="kube-system/coredns-66bc5c9577-7r54s"
	Dec 10 08:40:28 pause-767596 kubelet[1328]: E1210 08:40:28.080190    1328 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-767596\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b938f46528c3209faec4a21e53aef9ab" pod="kube-system/kube-apiserver-pause-767596"
	Dec 10 08:40:34 pause-767596 kubelet[1328]: E1210 08:40:34.383960    1328 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-767596\" is forbidden: User \"system:node:pause-767596\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-767596' and this object" podUID="44ca7fad4b77c4449373efd3a5c7f1c5" pod="kube-system/kube-scheduler-pause-767596"
	Dec 10 08:40:34 pause-767596 kubelet[1328]: E1210 08:40:34.487211    1328 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-767596\" is forbidden: User \"system:node:pause-767596\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-767596' and this object" podUID="9b51ec6fb7597bdd245e6184e27911f1" pod="kube-system/etcd-pause-767596"
	Dec 10 08:40:34 pause-767596 kubelet[1328]: E1210 08:40:34.562579    1328 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-kx2vt\" is forbidden: User \"system:node:pause-767596\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-767596' and this object" podUID="475678b0-6300-4037-8f4c-dbc6a7b12cb7" pod="kube-system/kindnet-kx2vt"
	Dec 10 08:40:44 pause-767596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 10 08:40:44 pause-767596 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 10 08:40:44 pause-767596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
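The kubelet's status_manager errors come in two phases: "connection refused" while the apiserver is down, then, right after it returns (08:40:34), "no relationship found between node 'pause-767596' and this object" — the node authorizer only lets a kubelet read pods once it has re-indexed which pods are bound to that node. The final systemd lines are the pause itself: kubelet.service is stopped cleanly. The node-scoped view the authorizer rebuilds can be queried explicitly with a field selector, as in this sketch (the kubeconfig path is an assumption; the node name comes from the log):

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Path is an assumption; any kubeconfig for the cluster works.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Pods bound to this node: the set the node authorizer scopes a
		// kubelet's reads to.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "spec.nodeName=pause-767596",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace + "/" + p.Name)
		}
	}
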

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-767596 -n pause-767596
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-767596 -n pause-767596: exit status 2 (371.998853ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-767596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.87s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (7200.062s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 09:22:09.142853  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/default-k8s-diff-port-229213/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 09:22:27.800645  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
E1210 09:23:01.623798  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.85.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.85.2:8443: connect: connection refused
	(the warning above was emitted 17 times in total as the helper retried against the stopped apiserver)
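
For reference, the request these warnings show is a plain client-go pod list filtered by a label selector. A minimal sketch of the same call, assuming a kubeconfig-based client (names here are illustrative, not the test helper's code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (the integration helpers
	// build theirs from the minikube profile instead).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same GET as in the warnings: pods in the kubernetes-dashboard
	// namespace matching k8s-app=kubernetes-dashboard. With the apiserver
	// stopped, this returns the "connection refused" error seen above.
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d dashboard pods\n", len(pods.Items))
}
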
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (32m45s)
		TestNetworkPlugins/group/enable-default-cni (57s)
		TestNetworkPlugins/group/enable-default-cni/Start (57s)
		TestStartStop (33m26s)
		TestStartStop/group/no-preload (28m7s)
		TestStartStop/group/no-preload/serial (28m7s)
		TestStartStop/group/no-preload/serial/AddonExistsAfterStop (2m56s)

goroutine 6122 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38
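
Goroutine 6122 is the testing package's timeout alarm firing. Conceptually it is just a timer armed from the -timeout flag that panics the whole binary; a simplified sketch (not the testing package's actual source):

package main

import "time"

func main() {
	// Arm a 2h watchdog, the effect of `go test -timeout 2h0m0s`.
	alarm := time.AfterFunc(2*time.Hour, func() {
		panic("test timed out after 2h0m0s")
	})
	defer alarm.Stop() // disarmed if the tests finish in time
	runAllTests()      // hypothetical stand-in for testing.(*M).Run
}

func runAllTests() { /* ... */ }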

goroutine 1 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000103a40, 0x40012d1bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400000e048, {0x534c680, 0x2c, 0x2c}, {0x40012d1d08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40002d6a00)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40002d6a00)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 180 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001370590, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001370580)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001358c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000083340?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40001060e0?}, 0x22ee5c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40001060e0}, 0x40012e9f38, {0x369e520, 0x4004f24720}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40000a5fa8?, {0x369e520?, 0x4004f24720?}, 0xa8?, 0x40003911f0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004f10500, 0x3b9aca00, 0x0, 0x1, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 171
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
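
Goroutine 180 is client-go's certificate-rotation worker parked on its work queue: the sync.Cond.Wait frame is the queue's blocking Get. The loop follows the standard workqueue worker shape, roughly (a sketch of the pattern, not cert_rotation.go verbatim):

package worker

import "k8s.io/client-go/util/workqueue"

// runWorker drains a typed work queue until shutdown. Get blocks on the
// queue's condition variable, which is exactly the parked frame above.
func runWorker(q workqueue.TypedInterface[string], process func(string) error) {
	for {
		key, shutdown := q.Get()
		if shutdown {
			return
		}
		_ = process(key) // a real controller would requeue on error
		q.Done(key)
	}
}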

goroutine 3652 [chan receive, 33 minutes]:
testing.(*testState).waitParallel(0x4000070230)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001388700)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001388700)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001388700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001388700, 0x40014dc780)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3508
	/usr/local/go/src/testing/testing.go:1997 +0x364
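
Goroutine 3652 (and the similar ones further down) is a TestNetworkPlugins subtest stuck in waitParallel: t.Parallel() blocks until one of the -test.parallel slots frees up, and none will while the no-preload test occupies the runner. The helper presumably reduces to something like this (assumed sketch):

package integration

import "testing"

func serialRequired() bool { return false } // hypothetical gate on test flags

// MaybeParallel (assumed shape) opts a test into parallel execution.
// t.Parallel parks the goroutine in waitParallel until a slot is free,
// which is where the 33-minute-old goroutines above sit.
func MaybeParallel(t *testing.T) {
	t.Helper()
	if !serialRequired() {
		t.Parallel()
	}
}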

goroutine 4053 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4052
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1702 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4001c956c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1701
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1191 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001950c00, 0x4001875a40)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1190
	/usr/local/go/src/os/exec/exec.go:775 +0x678
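
Goroutine 1191 (and the other "chan send" twins below, 79–109 minutes old) is os/exec's watchCtx helper stuck handing its result to a Wait that never ran. One hypothetical way to reproduce the shape, not minikube's code:

package main

import (
	"context"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cmd := exec.CommandContext(ctx, "sleep", "60")
	_ = cmd.Start()
	cancel()
	// Nothing calls cmd.Wait(), so the internal watchCtx goroutine blocks
	// forever on its channel send — the "chan send, N minutes" signature.
	time.Sleep(time.Second)
}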

goroutine 5558 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40017ffec0, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5553
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5557 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40003c6300?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5553
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 181 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40001060e0}, 0x400051af40, 0x400010ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40001060e0}, 0x0?, 0x400051af40, 0x400051af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40001060e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000174480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 171
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 170 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 160
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 681 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff54790c00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40014dc200?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40014dc200)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40014dc200)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40002be800)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40002be800)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4000153600, {0x36d4000, 0x40002be800})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4000153600)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 679
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104
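
Goroutine 681 is the test HTTP proxy from functional_test.go, idle in Accept for 113 minutes — normal for a server goroutine that is simply never shut down. The shape of startHTTPProxy is roughly (assumed sketch):

package integration

import (
	"net"
	"net/http"
	"testing"
)

// startHTTPProxy (assumed sketch): serve a proxy on an ephemeral port from a
// background goroutine; Accept blocks there for the life of the test binary.
func startHTTPProxy(t *testing.T, handler http.Handler) *net.TCPAddr {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		t.Fatal(err)
	}
	srv := &http.Server{Handler: handler}
	go func() {
		_ = srv.Serve(ln) // parked in (*TCPListener).Accept, as in the trace
	}()
	return ln.Addr().(*net.TCPAddr)
}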

goroutine 3508 [chan receive, 2 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001c94000, 0x4001580210)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3209
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 182 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 181
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 171 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001358c60, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 160
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1714 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40001060e0}, 0x4001977f40, 0x4001507f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40001060e0}, 0xd8?, 0x4001977f40, 0x4001977f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40001060e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000175080?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1703
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 976 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40002bfbd0, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40002bfbc0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016bede0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40003915e0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40001060e0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40001060e0}, 0x4001427f38, {0x369e520, 0x40015480f0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40015480f0?}, 0x10?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001f1f390, 0x3b9aca00, 0x0, 0x1, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 990
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5217 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5184
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1202 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x40019ca480, 0x40018bda40)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 794
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1145 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x40018ae600, 0x40018bc460)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1144
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5571 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e65a8, 0x4001375a40}, {0x36d4660, 0x40014879e0}, 0x1, 0x0, 0x4000473b00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6618?, 0x40003da000?}, 0x3b9aca00, 0x400133bd28?, 0x1, 0x400133bb00)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6618, 0x40003da000}, 0x40007a76c0, {0x4001b7c018, 0x11}, {0x29941e1, 0x14}, {0x29ac150, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36e6618, 0x40003da000}, 0x40007a76c0, {0x4001b7c018, 0x11}, {0x29786f9?, 0x20c8769200161e84?}, {0x69393b98?, 0x400010cf58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:285 +0xd4
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x40007a76c0?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x40007a76c0, 0x40015fe080)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3877
	/usr/local/go/src/testing/testing.go:1997 +0x364
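
Goroutine 5571 is the failing test itself: AddonExistsAfterStop → validateAddonAfterStop → PodWait, which polls via apimachinery's wait helpers once per second (the 0x3b9aca00 argument is 1e9 ns). A minimal sketch of the same call (interval taken from the trace, timeout illustrative):

package integration

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForPods polls check once per second until it reports true or the
// timeout elapses; `true` requests an immediate first poll.
func waitForPods(ctx context.Context, check wait.ConditionWithContextFunc) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 9*time.Minute, true, check)
}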

goroutine 3757 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40014af740, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3765
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3500 [chan receive, 28 minutes]:
testing.(*T).Run(0x40013496c0, {0x296eb91?, 0x0?}, 0x40014dc280)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x40013496c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x40013496c0, 0x400183a100)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3496
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 5180 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001358720, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3209 [chan receive, 33 minutes]:
testing.(*T).Run(0x4001c94380, {0x296d71f?, 0xaad5f429d49?}, 0x4001580210)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x4001c94380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x4001c94380, 0x339baf0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4032 [chan receive, 25 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001817860, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4027
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4282 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40001060e0}, 0x400130af40, 0x400130af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40001060e0}, 0x58?, 0x400130af40, 0x400130af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40001060e0?}, 0x4000316540?, 0x40006a0af0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40003c6f00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4270
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5836 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40017f1dc0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5835
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1304 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0xffff54790400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001c20000?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001c20000)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4001c20000)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4001b0abc0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4001b0abc0)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x400187eb00, {0x36d4000, 0x4001b0abc0})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x400187eb00)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1302
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 3756 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4000174780?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3765
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3257 [chan receive, 33 minutes]:
testing.(*T).Run(0x4001c948c0, {0x296d71f?, 0x4001416f58?}, 0x339bd20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4001c948c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4001c948c0, 0x339bb38)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364
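
Goroutines 3257, 3500, 3877, and 3496 are the nested t.Run parents of that test, each blocked in a chan receive until its subtree finishes. The fan-out is plain nested subtests (sketch of the shape):

package integration

import "testing"

// Each parent goroutine parks in a chan receive, as in the traces above,
// until its child subtests complete.
func TestStartStop(t *testing.T) {
	t.Run("group", func(t *testing.T) {
		t.Run("no-preload", func(t *testing.T) {
			t.Run("serial", func(t *testing.T) {
				t.Run("AddonExistsAfterStop", func(t *testing.T) { /* ... */ })
			})
		})
	})
}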

goroutine 4281 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40006d8850, 0x2)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40006d8840)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40014af140)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400051ae88?, 0x2a0ac?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40001060e0?}, 0xffff9b2565c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40001060e0}, 0x40012cff38, {0x369e520, 0x4000497020}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x4000497020?}, 0xc0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001b34060, 0x3b9aca00, 0x0, 0x1, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4270
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5184 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40001060e0}, 0x400051ef40, 0x400051ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40001060e0}, 0xd9?, 0x400051ef40, 0x400051ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40001060e0?}, 0x0?, 0x400051ef50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x4000224080?, 0x40012f8e00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5180
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3777 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40001060e0}, 0x4001979f40, 0x4001503f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40001060e0}, 0xc0?, 0x4001979f40, 0x4001979f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40001060e0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40003c6c00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3757
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4283 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4282
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6008 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0x40019afb00, 0x4001737810)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 6005
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 2093 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x4001594180, 0x40014321c0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1478
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5837 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40017fe300, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5835
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5179 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40012f8e00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5178
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3651 [chan receive, 33 minutes]:
testing.(*testState).waitParallel(0x4000070230)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001388380)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001388380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001388380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001388380, 0x40014dc700)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3508
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4052 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40001060e0}, 0x4001978740, 0x4001978788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40001060e0}, 0x4c?, 0x4001978740, 0x4001978788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40001060e0?}, 0x0?, 0x4001978750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x4000224080?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4032
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5544 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5543
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6006 [IO wait]:
internal/poll.runtime_pollWait(0xffff54790a00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001a52cc0?, 0x4001aecb76?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001a52cc0, {0x4001aecb76, 0x48a, 0x48a})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4001bd11a8, {0x4001aecb76?, 0x4001a85548?, 0xcc7cc?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40012e3710, {0x369c8e8, 0x40000a7e00})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x40012e3710}, {0x369c8e8, 0x40000a7e00}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4001bd11a8?, {0x369cae0, 0x40012e3710})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4001bd11a8, {0x369cae0, 0x40012e3710})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x40012e3710}, {0x369c968, 0x4001bd11a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40014acc40?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6005
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 4269 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x4001479180?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4268
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3580 [chan receive, 33 minutes]:
testing.(*testState).waitParallel(0x4000070230)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001c95500)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001c95500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001c95500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001c95500, 0x4001864800)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3508
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1715 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1714
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4051 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x400183b190, 0x15)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400183b180)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001817860)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40014bad20?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40001060e0?}, 0x400130cef8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40001060e0}, 0x4001412f38, {0x369e520, 0x40017ea1b0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x40017ea1b0?}, 0x0?, 0x4000667040?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004f11db0, 0x3b9aca00, 0x0, 0x1, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4032
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3877 [chan receive, 2 minutes]:
testing.(*T).Run(0x4001389c00, {0x2994231?, 0x40000006ee?}, 0x40015fe080)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x4001389c00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x4001389c00, 0x40014dc280)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3500
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3760 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x400183a9d0, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400183a9c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40014af740)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001432690?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40001060e0?}, 0x40019746a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40001060e0}, 0x40012ebf38, {0x369e520, 0x40016cff50}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40019747a8?, {0x369e520?, 0x40016cff50?}, 0x20?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4000bdfc90, 0x3b9aca00, 0x0, 0x1, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3757
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3778 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3777
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 990 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016bede0, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 988
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6005 [syscall, 2 minutes]:
syscall.Syscall6(0x5f, 0x3, 0x11, 0x4001709c38, 0x4, 0x40016afd40, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x4001709d98?, 0x1929a0?, 0xffffc7d531a1?, 0x0?, 0x40019bed80?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x4001b0b580)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x4001709d68?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x40019afb00)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x40019afb00)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x40014acc40, 0x40019afb00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x40014acc40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x44
testing.tRunner(0x40014acc40, 0x40012e35f0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3581
	/usr/local/go/src/testing/testing.go:1997 +0x364
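
Goroutine 6005 is the enable-default-cni/Start subtest waiting on a minikube child process (syscall → pidfdWait → cmd.Wait), while goroutines 6006 and 6007 above are exec's stdout/stderr copiers feeding bytes.Buffers. integration.Run presumably reduces to something like this (assumed sketch):

package integration

import (
	"bytes"
	"os/exec"
	"testing"
)

// Run (assumed sketch): execute a minikube command, capturing output. The
// Waitid frames above are cmd.Run blocked inside cmd.Wait; the two IO-wait
// goroutines are exec's writerDescriptor copiers draining the child's pipes.
func Run(t *testing.T, cmd *exec.Cmd) (stdout, stderr bytes.Buffer, err error) {
	t.Helper()
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err = cmd.Run() // Start + Wait
	return stdout, stderr, err
}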

goroutine 989 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x40003c6a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 988
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 993 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40001060e0}, 0x400130a740, 0x4000109f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40001060e0}, 0x58?, 0x400130a740, 0x400130a788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40001060e0?}, 0x333a373020303132?, 0x3535302e33333a33?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001430480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 990
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 994 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 993
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1713 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40002bef10, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40002bef00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40016be900)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002f6e00?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40001060e0?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40001060e0}, 0x40015e2f38, {0x369e520, 0x40007818c0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40007818c0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40014a62c0, 0x3b9aca00, 0x0, 0x1, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1703
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 5183 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x4001969510, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001969500)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001358720)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000082a10?, 0x40002f7040?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40001060e0?}, 0x40002f7120?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40001060e0}, 0x40015dff38, {0x369e520, 0x400200c2d0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40002f7030?, {0x369e520?, 0x400200c2d0?}, 0x10?, 0x4001332900?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40015977c0, 0x3b9aca00, 0x0, 0x1, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5180
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4031 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000224080?}, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4027
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5818 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001371b10, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001371b00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40017fe300)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400051ce88?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40001060e0?}, 0x400051cea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40001060e0}, 0x400142af38, {0x369e520, 0x4001be1e60}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4001be1e60?}, 0x0?, 0x36e6618?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004f11ac0, 0x3b9aca00, 0x0, 0x1, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5837
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3496 [chan receive, 5 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001348a80, 0x339bd20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3257
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 3581 [chan receive, 2 minutes]:
testing.(*T).Run(0x4001c956c0, {0x296d724?, 0x368adf0?}, 0x40012e35f0)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001c956c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x4f4
testing.tRunner(0x4001c956c0, 0x4001864880)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3508
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 6007 [IO wait]:
internal/poll.runtime_pollWait(0xffff54790e00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001a52d80?, 0x4001c7f595?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001a52d80, {0x4001c7f595, 0x6a6b, 0x6a6b})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x4001bd11c0, {0x4001c7f595?, 0x4001978d48?, 0xcc7cc?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40012e3740, {0x369c8e8, 0x40000a7e08})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x40012e3740}, {0x369c8e8, 0x40000a7e08}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x4001bd11c0?, {0x369cae0, 0x40012e3740})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x4001bd11c0, {0x369cae0, 0x40012e3740})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x40012e3740}, {0x369c968, 0x4001bd11c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x40017f0700?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6005
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 2042 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x4000174780, 0x4001602460)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2041
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1703 [chan receive, 81 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40016be900, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1701
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5820 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5819
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1281 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x400182f9e0)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1198
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 1200 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x400182f9e0)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1198
	/usr/local/go/src/net/http/transport.go:1947 +0x111c
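
Goroutines 1281 and 1200 are the writer/reader pair that net/http keeps per pooled keep-alive connection; "select, 109 minutes" just means the connection has been idle that long. A sketch of the lifecycle (makes a real request; example URL only):

package main

import (
    "fmt"
    "io"
    "net/http"
    "runtime"
    "time"
)

func main() {
    resp, err := http.Get("https://example.com/")
    if err != nil {
        panic(err)
    }
    io.Copy(io.Discard, resp.Body)
    resp.Body.Close() // connection returns to the pool; its readLoop/writeLoop idle in select

    fmt.Println("goroutines with pooled conn:", runtime.NumGoroutine())

    http.DefaultClient.CloseIdleConnections() // tears the pair down
    time.Sleep(100 * time.Millisecond)
    fmt.Println("goroutines after close:", runtime.NumGoroutine())
}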

goroutine 2075 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x4000175200, 0x40016032d0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 2074
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5542 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40017fb310, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40017fb300)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40017ffec0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40017d9ea0?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x40001060e0?}, 0x400051c6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x40001060e0}, 0x4001417f38, {0x369e520, 0x4001826660}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x4001826660?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40012db7c0, 0x3b9aca00, 0x0, 0x1, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5558
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
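
The sync.Cond.Wait above is client-go's workqueue parking a worker until an item arrives; cert_rotation.go's runWorker loops on Get exactly like this. A reduced sketch of the consumer pattern (requires k8s.io/client-go in go.mod; the item type and strings are illustrative):

package main

import (
    "fmt"

    "k8s.io/client-go/util/workqueue"
)

func main() {
    q := workqueue.NewTyped[string]()

    go func() {
        q.Add("rotate-client-cert") // in client-go, the cert watcher enqueues here
        q.ShutDown()
    }()

    for {
        item, shutdown := q.Get() // blocks in sync.Cond.Wait, as in the dump
        if shutdown {
            return
        }
        fmt.Println("processing", item)
        q.Done(item) // mark processed so the item can be re-queued later
    }
}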

goroutine 5819 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40001060e0}, 0x4001972740, 0x4001972788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40001060e0}, 0x0?, 0x4001972740, 0x4001972788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40001060e0?}, 0x36e6618?, 0x4000083f10?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4000083e30?, 0x0?, 0x40003c6780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5837
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 4270 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40014af140, 0x40001060e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4268
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5543 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x40001060e0}, 0x4001a84f40, 0x4001a84f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x40001060e0}, 0x18?, 0x4001a84f40, 0x4001a84f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x40001060e0?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000175980?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5558
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c
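
Goroutines 5543, 5819, and 5820 are the plumbing of apimachinery's (deprecated) PollImmediateUntil helpers: the calling goroutine blocks in waitForWithContext while a spawned poller ticks on an interval. The calling pattern, reduced (condition body is illustrative):

package main

import (
    "context"
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    start := time.Now()
    // Between ticks, the caller and the spawned poller sit in select --
    // the "select, 2 minutes" / "select, 3 minutes" states in the dump.
    err := wait.PollImmediateUntilWithContext(ctx, time.Second,
        func(ctx context.Context) (bool, error) {
            return time.Since(start) > 2*time.Second, nil // done condition
        })
    fmt.Println("poll finished:", err)
}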

Test pass (240/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.77
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.31
9 TestDownloadOnly/v1.28.0/DeleteAll 0.38
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.34.2/json-events 4.08
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.88
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.35
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 159.76
40 TestAddons/serial/GCPAuth/Namespaces 0.22
41 TestAddons/serial/GCPAuth/FakeCredentials 8.87
57 TestAddons/StoppedEnableDisable 12.41
58 TestCertOptions 36.12
59 TestCertExpiration 236.82
61 TestForceSystemdFlag 41.45
62 TestForceSystemdEnv 33.62
67 TestErrorSpam/setup 32.89
68 TestErrorSpam/start 0.92
69 TestErrorSpam/status 1.09
70 TestErrorSpam/pause 6.53
71 TestErrorSpam/unpause 5.36
72 TestErrorSpam/stop 1.57
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 79.25
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 19.28
79 TestFunctional/serial/KubeContext 0.09
80 TestFunctional/serial/KubectlGetPods 0.15
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
84 TestFunctional/serial/CacheCmd/cache/add_local 1.31
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.78
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 58.14
93 TestFunctional/serial/ComponentHealth 0.11
94 TestFunctional/serial/LogsCmd 1.47
95 TestFunctional/serial/LogsFileCmd 1.46
96 TestFunctional/serial/InvalidService 4.25
98 TestFunctional/parallel/ConfigCmd 0.48
99 TestFunctional/parallel/DashboardCmd 13.26
100 TestFunctional/parallel/DryRun 0.47
101 TestFunctional/parallel/InternationalLanguage 0.23
102 TestFunctional/parallel/StatusCmd 1.1
106 TestFunctional/parallel/ServiceCmdConnect 6.63
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 20.63
110 TestFunctional/parallel/SSHCmd 0.69
111 TestFunctional/parallel/CpCmd 2.01
113 TestFunctional/parallel/FileSync 0.36
114 TestFunctional/parallel/CertSync 2.58
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
122 TestFunctional/parallel/License 0.34
123 TestFunctional/parallel/Version/short 0.08
124 TestFunctional/parallel/Version/components 1.03
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.54
129 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
130 TestFunctional/parallel/ImageCommands/Setup 0.68
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.73
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
138 TestFunctional/parallel/ProfileCmd/profile_list 0.56
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.1
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
156 TestFunctional/parallel/MountCmd/any-port 8.29
157 TestFunctional/parallel/ServiceCmd/List 0.57
158 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
160 TestFunctional/parallel/ServiceCmd/Format 0.41
161 TestFunctional/parallel/ServiceCmd/URL 0.38
162 TestFunctional/parallel/MountCmd/specific-port 2.04
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.95
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.44
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.08
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.34
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.79
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.44
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.21
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 1.14
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.09
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.36
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.11
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.69
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.41
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.48
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.24
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.21
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.83
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.31
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.58
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.98
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.16
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.52
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.61
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.94
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.49
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.4
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.65
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.41
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.83
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.93
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 195.4
265 TestMultiControlPlane/serial/DeployApp 7.61
266 TestMultiControlPlane/serial/PingHostFromPods 1.57
267 TestMultiControlPlane/serial/AddWorkerNode 59.26
268 TestMultiControlPlane/serial/NodeLabels 0.11
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
270 TestMultiControlPlane/serial/CopyFile 19.86
271 TestMultiControlPlane/serial/StopSecondaryNode 12.87
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
273 TestMultiControlPlane/serial/RestartSecondaryNode 28.55
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.33
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 133.22
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.27
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.84
278 TestMultiControlPlane/serial/StopCluster 36.03
279 TestMultiControlPlane/serial/RestartCluster 161.58
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
281 TestMultiControlPlane/serial/AddSecondaryNode 81.83
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.04
287 TestJSONOutput/start/Command 81.4
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.83
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 38.49
313 TestKicCustomNetwork/use_default_bridge_network 34
314 TestKicExistingNetwork 36.11
315 TestKicCustomSubnet 35.22
316 TestKicStaticIP 38.03
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 72.46
321 TestMountStart/serial/StartWithMountFirst 8.52
322 TestMountStart/serial/VerifyMountFirst 0.27
323 TestMountStart/serial/StartWithMountSecond 6.31
324 TestMountStart/serial/VerifyMountSecond 0.26
325 TestMountStart/serial/DeleteFirst 1.92
326 TestMountStart/serial/VerifyMountPostDelete 0.26
327 TestMountStart/serial/Stop 1.31
328 TestMountStart/serial/RestartStopped 8.14
329 TestMountStart/serial/VerifyMountPostStop 0.26
332 TestMultiNode/serial/FreshStart2Nodes 135.81
333 TestMultiNode/serial/DeployApp2Nodes 4.78
334 TestMultiNode/serial/PingHostFrom2Pods 0.93
335 TestMultiNode/serial/AddNode 57.69
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.68
338 TestMultiNode/serial/CopyFile 10.33
339 TestMultiNode/serial/StopNode 2.5
340 TestMultiNode/serial/StartAfterStop 8.12
341 TestMultiNode/serial/RestartKeepsNodes 78.24
342 TestMultiNode/serial/DeleteNode 5.68
343 TestMultiNode/serial/StopMultiNode 24
344 TestMultiNode/serial/RestartMultiNode 50.35
345 TestMultiNode/serial/ValidateNameConflict 33.87
350 TestPreload 117.52
352 TestScheduledStopUnix 110.91
355 TestInsufficientStorage 13.35
356 TestRunningBinaryUpgrade 299.44
359 TestMissingContainerUpgrade 96.95
362 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
363 TestPause/serial/Start 89.3
364 TestNoKubernetes/serial/StartWithK8s 39.36
365 TestNoKubernetes/serial/StartWithStopK8s 7.11
366 TestNoKubernetes/serial/Start 8.02
367 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
368 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
369 TestNoKubernetes/serial/ProfileList 1.09
370 TestNoKubernetes/serial/Stop 1.37
371 TestNoKubernetes/serial/StartNoArgs 6.85
372 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
373 TestPause/serial/SecondStartNoReconfiguration 123.45
375 TestStoppedBinaryUpgrade/Setup 0.9
376 TestStoppedBinaryUpgrade/Upgrade 300.96
377 TestStoppedBinaryUpgrade/MinikubeLogs 1.74
TestDownloadOnly/v1.28.0/json-events (10.77s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-941654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-941654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.765610624s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.77s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1210 07:24:35.530002  378528 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1210 07:24:35.530079  378528 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
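
The preload-exists subtest passes when the cached tarball for the requested Kubernetes version is already on disk under MINIKUBE_HOME. A hedged sketch of that lookup (path layout taken from the log lines above; the helper name is invented):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// preloadExists is a hypothetical helper mirroring what preload.go logs:
// it reports whether the cached preload tarball already exists locally.
func preloadExists(minikubeHome, k8sVersion string) (string, bool) {
    p := filepath.Join(minikubeHome, "cache", "preloaded-tarball",
        fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4", k8sVersion))
    _, err := os.Stat(p)
    return p, err == nil
}

func main() {
    if p, ok := preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0"); ok {
        fmt.Println("Found local preload:", p)
    }
}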

TestDownloadOnly/v1.28.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-941654
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-941654: exit status 85 (309.808833ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-941654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-941654 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:24:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:24:24.812238  378534 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:24:24.812436  378534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:24.812463  378534 out.go:374] Setting ErrFile to fd 2...
	I1210 07:24:24.812481  378534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:24.812771  378534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	W1210 07:24:24.812940  378534 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22089-376671/.minikube/config/config.json: open /home/jenkins/minikube-integration/22089-376671/.minikube/config/config.json: no such file or directory
	I1210 07:24:24.813434  378534 out.go:368] Setting JSON to true
	I1210 07:24:24.814285  378534 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7615,"bootTime":1765343850,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:24:24.814378  378534 start.go:143] virtualization:  
	I1210 07:24:24.819807  378534 out.go:99] [download-only-941654] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1210 07:24:24.820001  378534 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball: no such file or directory
	I1210 07:24:24.820126  378534 notify.go:221] Checking for updates...
	I1210 07:24:24.824260  378534 out.go:171] MINIKUBE_LOCATION=22089
	I1210 07:24:24.827639  378534 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:24:24.830988  378534 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:24:24.834227  378534 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:24:24.837451  378534 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 07:24:24.843593  378534 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 07:24:24.843873  378534 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:24:24.879678  378534 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:24:24.879788  378534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:24.941421  378534 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 07:24:24.931630424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:24.941529  378534 docker.go:319] overlay module found
	I1210 07:24:24.944788  378534 out.go:99] Using the docker driver based on user configuration
	I1210 07:24:24.944827  378534 start.go:309] selected driver: docker
	I1210 07:24:24.944834  378534 start.go:927] validating driver "docker" against <nil>
	I1210 07:24:24.944938  378534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:24.998609  378534 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-10 07:24:24.98877482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:24.998792  378534 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:24:24.999183  378534 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 07:24:24.999359  378534 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 07:24:25.004636  378534 out.go:171] Using Docker driver with root privileges
	I1210 07:24:25.007892  378534 cni.go:84] Creating CNI manager for ""
	I1210 07:24:25.007983  378534 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1210 07:24:25.007993  378534 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 07:24:25.008110  378534 start.go:353] cluster config:
	{Name:download-only-941654 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-941654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:24:25.011402  378534 out.go:99] Starting "download-only-941654" primary control-plane node in "download-only-941654" cluster
	I1210 07:24:25.011441  378534 cache.go:134] Beginning downloading kic base image for docker with crio
	I1210 07:24:25.014511  378534 out.go:99] Pulling base image v0.0.48-1765319469-22089 ...
	I1210 07:24:25.014589  378534 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 07:24:25.014682  378534 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local docker daemon
	I1210 07:24:25.029812  378534 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 07:24:25.030032  378534 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca in local cache directory
	I1210 07:24:25.030140  378534 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca to local cache
	I1210 07:24:25.069585  378534 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:24:25.069613  378534 cache.go:65] Caching tarball of preloaded images
	I1210 07:24:25.069823  378534 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 07:24:25.073277  378534 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1210 07:24:25.073313  378534 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1210 07:24:25.164949  378534 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1210 07:24:25.165085  378534 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1210 07:24:28.660548  378534 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1210 07:24:28.661088  378534 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/download-only-941654/config.json ...
	I1210 07:24:28.661149  378534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/download-only-941654/config.json: {Name:mkf2b30143bfea65cc69045bf369c0990fcfddaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 07:24:28.661398  378534 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1210 07:24:28.661661  378534 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22089-376671/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-941654 host does not exist
	  To start a cluster, run: "minikube start -p download-only-941654"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.31s)

TestDownloadOnly/v1.28.0/DeleteAll (0.38s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.38s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-941654
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.34.2/json-events (4.08s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-417315 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-417315 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.079033168s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.08s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1210 07:24:40.527790  378528 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1210 07:24:40.527824  378528 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-417315
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-417315: exit status 85 (92.289167ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-941654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-941654 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ delete  │ -p download-only-941654                                                                                                                                                   │ download-only-941654 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ start   │ -o=json --download-only -p download-only-417315 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-417315 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:24:36
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:24:36.497141  378730 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:24:36.497273  378730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:36.497279  378730 out.go:374] Setting ErrFile to fd 2...
	I1210 07:24:36.497284  378730 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:36.497745  378730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:24:36.498193  378730 out.go:368] Setting JSON to true
	I1210 07:24:36.499414  378730 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7627,"bootTime":1765343850,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:24:36.499498  378730 start.go:143] virtualization:  
	I1210 07:24:36.523100  378730 out.go:99] [download-only-417315] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:24:36.523481  378730 notify.go:221] Checking for updates...
	I1210 07:24:36.563070  378730 out.go:171] MINIKUBE_LOCATION=22089
	I1210 07:24:36.586359  378730 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:24:36.620057  378730 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:24:36.666530  378730 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:24:36.701301  378730 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 07:24:36.761627  378730 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 07:24:36.761955  378730 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:24:36.785563  378730 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:24:36.785685  378730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:36.844267  378730 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 07:24:36.835058788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:36.844375  378730 docker.go:319] overlay module found
	I1210 07:24:36.887146  378730 out.go:99] Using the docker driver based on user configuration
	I1210 07:24:36.887196  378730 start.go:309] selected driver: docker
	I1210 07:24:36.887204  378730 start.go:927] validating driver "docker" against <nil>
	I1210 07:24:36.887327  378730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:36.942468  378730 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 07:24:36.93303593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:36.942646  378730 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:24:36.942933  378730 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 07:24:36.943123  378730 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 07:24:36.981447  378730 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-417315 host does not exist
	  To start a cluster, run: "minikube start -p download-only-417315"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-417315
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.88s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-121667 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-121667 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.883328028s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.88s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1210 07:24:44.874791  378528 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1210 07:24:44.874828  378528 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-121667
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-121667: exit status 85 (89.659409ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-941654 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-941654 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ delete  │ -p download-only-941654                                                                                                                                                          │ download-only-941654 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ start   │ -o=json --download-only -p download-only-417315 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-417315 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ delete  │ -p download-only-417315                                                                                                                                                          │ download-only-417315 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │ 10 Dec 25 07:24 UTC │
	│ start   │ -o=json --download-only -p download-only-121667 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-121667 │ jenkins │ v1.37.0 │ 10 Dec 25 07:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/10 07:24:41
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 07:24:41.037426  378930 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:24:41.037541  378930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:41.037553  378930 out.go:374] Setting ErrFile to fd 2...
	I1210 07:24:41.037558  378930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:24:41.037799  378930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:24:41.038231  378930 out.go:368] Setting JSON to true
	I1210 07:24:41.039064  378930 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7631,"bootTime":1765343850,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:24:41.039133  378930 start.go:143] virtualization:  
	I1210 07:24:41.042535  378930 out.go:99] [download-only-121667] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:24:41.042816  378930 notify.go:221] Checking for updates...
	I1210 07:24:41.046369  378930 out.go:171] MINIKUBE_LOCATION=22089
	I1210 07:24:41.049790  378930 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:24:41.052787  378930 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:24:41.055860  378930 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:24:41.058818  378930 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1210 07:24:41.064533  378930 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1210 07:24:41.064823  378930 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:24:41.094730  378930 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:24:41.094856  378930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:41.149867  378930 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 07:24:41.140543628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:41.149972  378930 docker.go:319] overlay module found
	I1210 07:24:41.152977  378930 out.go:99] Using the docker driver based on user configuration
	I1210 07:24:41.153025  378930 start.go:309] selected driver: docker
	I1210 07:24:41.153038  378930 start.go:927] validating driver "docker" against <nil>
	I1210 07:24:41.153144  378930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:24:41.206186  378930 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-10 07:24:41.197621184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:24:41.206346  378930 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1210 07:24:41.206606  378930 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1210 07:24:41.206758  378930 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 07:24:41.209855  378930 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-121667 host does not exist
	  To start a cluster, run: "minikube start -p download-only-121667"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-121667
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestBinaryMirror
I1210 07:24:46.311305  378528 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-728513 --alsologtostderr --binary-mirror http://127.0.0.1:37169 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-728513" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-728513
--- PASS: TestBinaryMirror (0.62s)
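
Note: the I1210 07:24:46.311305 line above shows the mechanism behind TestBinaryMirror: instead of caching kubectl, minikube downloads it through a URL whose checksum=file: query points at the published .sha256 file and verifies the bytes against it. A minimal sketch of that verify-against-detached-checksum pattern in Go follows (an illustration only, not minikube's actual downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL into memory; a helper for this sketch only.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	binURL := "https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl"

	bin, err := fetch(binURL)
	if err != nil {
		panic(err)
	}
	// The detached checksum is published next to the binary.
	sumFile, err := fetch(binURL + ".sha256")
	if err != nil {
		panic(err)
	}

	// The .sha256 file holds the hex digest, possibly followed by a filename.
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}
	fmt.Println("kubectl verified against", binURL+".sha256")
}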

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-054300
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-054300: exit status 85 (70.909782ms)

-- stdout --
	* Profile "addons-054300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-054300"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-054300
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-054300: exit status 85 (79.597193ms)

-- stdout --
	* Profile "addons-054300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-054300"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-054300 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-054300 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m39.760878113s)
--- PASS: TestAddons/Setup (159.76s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-054300 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-054300 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-054300 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-054300 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a5aaf232-0921-4cfd-ae90-dd58a577316e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a5aaf232-0921-4cfd-ae90-dd58a577316e] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.0035777s
addons_test.go:696: (dbg) Run:  kubectl --context addons-054300 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-054300 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-054300 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-054300 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.87s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-054300
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-054300: (12.134118523s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-054300
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-054300
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-054300
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-548628 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-548628 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.268138433s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-548628 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-548628 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-548628 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-548628" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-548628
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-548628: (2.127000796s)
--- PASS: TestCertOptions (36.12s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-682065 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1210 08:52:27.800293  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-682065 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (34.116100085s)
E1210 08:53:01.626821  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-682065 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-682065 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.844187088s)
helpers_test.go:176: Cleaning up "cert-expiration-682065" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-682065
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-682065: (2.863481467s)
--- PASS: TestCertExpiration (236.82s)
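
Note: TestCertExpiration forces certificate regeneration by starting the cluster with --cert-expiration=3m, letting the 3-minute window lapse during the test, then restarting with --cert-expiration=8760h so minikube must reissue the expired certificates. As a hedged illustration of the mechanics under test (standard library only, not minikube's certificate code), issuing a self-signed certificate with a 3-minute lifetime and inspecting its expiry looks like this:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// A deliberately short NotAfter, mirroring --cert-expiration=3m.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-demo"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * time.Minute),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("certificate expires in:", time.Until(cert.NotAfter).Round(time.Second))
}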

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-789705 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-789705 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.607174253s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-789705 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-789705" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-789705
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-789705: (2.551514677s)
--- PASS: TestForceSystemdFlag (41.45s)
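
Note: the assertion here is that --force-systemd ends up in CRI-O's drop-in config, which the test reads via "cat /etc/crio/crio.conf.d/02-crio.conf". A minimal Go check for the corresponding TOML key follows; the drop-in path is taken from the test above, and cgroup_manager = "systemd" is assumed to be the relevant CRI-O setting:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Println("read config:", err)
		return
	}
	if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is configured for the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not configured")
	}
}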

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-236305 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-236305 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.035374818s)
helpers_test.go:176: Cleaning up "force-systemd-env-236305" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-236305
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-236305: (2.582798613s)
--- PASS: TestForceSystemdEnv (33.62s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-124596 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-124596 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-124596 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-124596 --driver=docker  --container-runtime=crio: (32.889463488s)
--- PASS: TestErrorSpam/setup (32.89s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 start --dry-run
--- PASS: TestErrorSpam/start (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 status
--- PASS: TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 pause: exit status 80 (2.429958311s)

-- stdout --
	* Pausing node nospam-124596 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:31:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 pause: exit status 80 (2.473167014s)

-- stdout --
	* Pausing node nospam-124596 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:31:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 pause: exit status 80 (1.623656808s)

-- stdout --
	* Pausing node nospam-124596 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:31:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.53s)
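
Note: all three pause attempts fail at the same step: minikube enumerates containers by shelling out to "sudo runc list -f json", and runc exits 1 because /run/runc, its default state directory, does not exist on this CRI-O node. A rough sketch of that list-and-parse step follows; the JSON field names ("id", "status") are assumptions based on runc's state format, not code lifted from minikube:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// containerState models the subset of "runc list -f json" output used here;
// the field names are assumed from runc's state format.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listRunning() ([]containerState, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// e.g. "open /run/runc: no such file or directory" when the
		// state directory is missing, as in the failures above.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var all []containerState
	if err := json.Unmarshal(out, &all); err != nil {
		return nil, err
	}
	var running []containerState
	for _, c := range all {
		if c.Status == "running" {
			running = append(running, c)
		}
	}
	return running, nil
}

func main() {
	cs, err := listRunning()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("%d running container(s)\n", len(cs))
}

If the runtime keeps runc state somewhere other than the default, runc's global --root flag would have to point at that directory for the listing to see any containers.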

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 unpause: exit status 80 (1.918930744s)

-- stdout --
	* Unpausing node nospam-124596 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:31:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 unpause: exit status 80 (1.781023014s)

-- stdout --
	* Unpausing node nospam-124596 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:31:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 unpause: exit status 80 (1.657868643s)

-- stdout --
	* Unpausing node nospam-124596 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-10T07:31:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 stop: (1.31603604s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-124596 --log_dir /tmp/nospam-124596 stop
--- PASS: TestErrorSpam/stop (1.57s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446865 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1210 07:32:27.808964  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:27.815503  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:27.826803  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:27.848130  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:27.889452  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:27.970807  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:28.132166  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:28.453754  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:29.095693  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:30.377068  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:32.939454  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:38.061578  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:32:48.302954  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 07:33:08.784411  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-446865 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.241136634s)
--- PASS: TestFunctional/serial/StartWithProxy (79.25s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/SoftStart
I1210 07:33:24.433210  378528 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446865 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-446865 --alsologtostderr -v=8: (19.279299372s)
functional_test.go:678: soft start took 19.282447657s for "functional-446865" cluster.
I1210 07:33:43.712865  378528 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (19.28s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-446865 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-446865 cache add registry.k8s.io/pause:3.1: (1.252880478s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-446865 cache add registry.k8s.io/pause:3.3: (1.190657942s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-446865 cache add registry.k8s.io/pause:latest: (1.068624451s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-446865 /tmp/TestFunctionalserialCacheCmdcacheadd_local2999979054/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 cache add minikube-local-cache-test:functional-446865
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 cache delete minikube-local-cache-test:functional-446865
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-446865
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1210 07:33:49.746189  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446865 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.055754ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)
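
Note: the reload sequence above is pure exit-code plumbing: remove the cached image on the node (crictl rmi), expect crictl inspecti to exit non-zero, run "minikube cache reload", then expect inspecti to succeed. A sketch of driving that assertion chain from Go, using only the commands visible in the log (the run helper is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its exit code (0 on success, -1 if it
// could not be started at all).
func run(name string, args ...string) int {
	if err := exec.Command(name, args...).Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	const profile = "functional-446865" // profile name from the log above
	const img = "registry.k8s.io/pause:latest"

	run("out/minikube-linux-arm64", "-p", profile, "ssh", "sudo crictl rmi "+img)
	if run("out/minikube-linux-arm64", "-p", profile, "ssh", "sudo crictl inspecti "+img) == 0 {
		fmt.Println("expected inspecti to fail after rmi")
		return
	}
	run("out/minikube-linux-arm64", "-p", profile, "cache", "reload")
	if run("out/minikube-linux-arm64", "-p", profile, "ssh", "sudo crictl inspecti "+img) != 0 {
		fmt.Println("image still missing after cache reload")
		return
	}
	fmt.Println("cache reload restored", img)
}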

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 kubectl -- --context functional-446865 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-446865 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446865 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-446865 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.143025643s)
functional_test.go:776: restart took 58.143131433s for "functional-446865" cluster.
I1210 07:34:49.536021  378528 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (58.14s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-446865 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-446865 logs: (1.46487708s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 logs --file /tmp/TestFunctionalserialLogsFileCmd1684418412/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-446865 logs --file /tmp/TestFunctionalserialLogsFileCmd1684418412/001/logs.txt: (1.463625153s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-446865 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-446865
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-446865: exit status 115 (387.725548ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30204 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-446865 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)
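Note: exit status 115 (SVC_UNREACHABLE) can be reproduced with any NodePort service whose selector matches no running pod; a minimal sketch with a hypothetical demo-svc rather than the test's invalidsvc.yaml:

    # Create a service that selects a label no pod carries
    kubectl --context functional-446865 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-svc
    spec:
      type: NodePort
      selector:
        app: does-not-exist
      ports:
      - port: 80
    EOF
    minikube -p functional-446865 service demo-svc   # expected to exit 115: no running pod for the service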

TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446865 config get cpus: exit status 14 (78.931046ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446865 config get cpus: exit status 14 (80.351934ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
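Note: exit status 14 is what `minikube config get` returns for an unset key; the cycle exercised above, as a shell session:

    minikube -p functional-446865 config unset cpus
    minikube -p functional-446865 config get cpus    # exit 14: key not found in config
    minikube -p functional-446865 config set cpus 2
    minikube -p functional-446865 config get cpus    # prints 2, exit 0
    minikube -p functional-446865 config unset cpus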

TestFunctional/parallel/DashboardCmd (13.26s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-446865 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-446865 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 404957: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.26s)
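Note: `dashboard --url` prints the proxy URL instead of launching a browser, which is what lets the harness run it as a daemon; a rough manual equivalent (the "unable to kill pid" message above only means the daemon had already exited):

    minikube dashboard --url --port 36195 -p functional-446865 &
    DASH_PID=$!
    # ...fetch the printed URL...
    kill "$DASH_PID" 2>/dev/null || true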

TestFunctional/parallel/DryRun (0.47s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446865 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-446865 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.215665ms)
-- stdout --
	* [functional-446865] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1210 07:35:26.531599  403687 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:35:26.531770  403687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:35:26.531782  403687 out.go:374] Setting ErrFile to fd 2...
	I1210 07:35:26.531787  403687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:35:26.532052  403687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:35:26.532434  403687 out.go:368] Setting JSON to false
	I1210 07:35:26.533344  403687 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8277,"bootTime":1765343850,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:35:26.533420  403687 start.go:143] virtualization:  
	I1210 07:35:26.537010  403687 out.go:179] * [functional-446865] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 07:35:26.540030  403687 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:35:26.540149  403687 notify.go:221] Checking for updates...
	I1210 07:35:26.546764  403687 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:35:26.549695  403687 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:35:26.552619  403687 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:35:26.555429  403687 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:35:26.558318  403687 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:35:26.561658  403687 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:35:26.562261  403687 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:35:26.587984  403687 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:35:26.588111  403687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:35:26.658927  403687 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:35:26.649117418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:35:26.659102  403687 docker.go:319] overlay module found
	I1210 07:35:26.663971  403687 out.go:179] * Using the docker driver based on existing profile
	I1210 07:35:26.666905  403687 start.go:309] selected driver: docker
	I1210 07:35:26.666943  403687 start.go:927] validating driver "docker" against &{Name:functional-446865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-446865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:35:26.667077  403687 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:35:26.670655  403687 out.go:203] 
	W1210 07:35:26.673452  403687 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 07:35:26.676356  403687 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446865 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)

TestFunctional/parallel/InternationalLanguage (0.23s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-446865 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-446865 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (227.6006ms)
-- stdout --
	* [functional-446865] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1210 07:35:31.971622  404766 out.go:360] Setting OutFile to fd 1 ...
	I1210 07:35:31.971732  404766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:35:31.971741  404766 out.go:374] Setting ErrFile to fd 2...
	I1210 07:35:31.971746  404766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 07:35:31.972689  404766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 07:35:31.973087  404766 out.go:368] Setting JSON to false
	I1210 07:35:31.974089  404766 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8282,"bootTime":1765343850,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 07:35:31.974158  404766 start.go:143] virtualization:  
	I1210 07:35:31.979992  404766 out.go:179] * [functional-446865] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1210 07:35:31.985353  404766 notify.go:221] Checking for updates...
	I1210 07:35:31.989842  404766 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 07:35:31.993358  404766 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 07:35:31.997028  404766 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 07:35:32.002206  404766 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 07:35:32.005829  404766 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 07:35:32.009246  404766 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 07:35:32.012960  404766 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 07:35:32.013602  404766 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 07:35:32.044742  404766 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 07:35:32.044862  404766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 07:35:32.106221  404766 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-10 07:35:32.096886023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 07:35:32.106325  404766 docker.go:319] overlay module found
	I1210 07:35:32.109624  404766 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 07:35:32.112827  404766 start.go:309] selected driver: docker
	I1210 07:35:32.112872  404766 start.go:927] validating driver "docker" against &{Name:functional-446865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-446865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 07:35:32.112971  404766 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 07:35:32.116723  404766 out.go:203] 
	W1210 07:35:32.119815  404766 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 07:35:32.122889  404766 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
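Note: both DryRun and InternationalLanguage validate flags without creating anything; exit 23 rejects the 250MB request as below the 1800MB usable minimum, and the French run shows the same RSRC_INSUFFICIENT_REQ_MEMORY error localized. A sketch, assuming minikube selects its translations from the standard locale environment variables:

    minikube start -p functional-446865 --dry-run --memory 250MB --driver=docker --container-runtime=crio
    LC_ALL=fr_FR.UTF-8 minikube start -p functional-446865 --dry-run --memory 250MB --driver=docker --container-runtime=crio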

TestFunctional/parallel/StatusCmd (1.1s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
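Note: `status -f` takes a Go template over the status struct, so the same data can be shaped three ways:

    minikube -p functional-446865 status           # human-readable summary
    minikube -p functional-446865 status -o json   # machine-readable
    minikube -p functional-446865 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'   # custom template over the fields used above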

TestFunctional/parallel/ServiceCmdConnect (6.63s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-446865 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-446865 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-m7lrc" [d2ef2b8d-a979-45ae-978e-b141d7bd98a6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-m7lrc" [d2ef2b8d-a979-45ae-978e-b141d7bd98a6] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003380261s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31362
functional_test.go:1680: http://192.168.49.2:31362: success! body:
Request served by hello-node-connect-7d85dfc575-m7lrc

HTTP/1.1 GET /

Host: 192.168.49.2:31362
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.63s)
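Note: the end-to-end flow above condenses to four commands; the NodePort URL (here http://192.168.49.2:31362) is assigned when the deployment is exposed:

    kubectl --context functional-446865 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-446865 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-446865 service hello-node-connect --url)
    curl -s "$URL"    # echo-server answers with the request it received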

TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (20.63s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [3aab4f25-5b68-4ba0-963b-d8dd5b45f72a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003440878s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-446865 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-446865 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-446865 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-446865 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [fb500576-f3e5-4ee8-9ee3-dbdb2d3c541a] Pending
E1210 07:35:11.667861  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "sp-pod" [fb500576-f3e5-4ee8-9ee3-dbdb2d3c541a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002738646s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-446865 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-446865 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-446865 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [d854c689-c960-4832-ad00-18e9f4e4275b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [d854c689-c960-4832-ad00-18e9f4e4275b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003914889s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-446865 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.63s)
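Note: the recreate step is the point of the test: data written through the claim must survive pod deletion. The sequence from the log, compressed (in practice each apply needs a readiness wait before the exec):

    kubectl --context functional-446865 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-446865 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-446865 exec sp-pod -- touch /tmp/mount/foo   # write into the PVC-backed mount
    kubectl --context functional-446865 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-446865 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-446865 exec sp-pod -- ls /tmp/mount          # foo must still be present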

TestFunctional/parallel/SSHCmd (0.69s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.01s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh -n functional-446865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 cp functional-446865:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3157563298/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh -n functional-446865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh -n functional-446865 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)
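Note: `minikube cp` copies in either direction, with the node side addressed as <profile>:<path>; the cases exercised above (the local copy.txt target is illustrative):

    minikube -p functional-446865 cp testdata/cp-test.txt /home/docker/cp-test.txt          # host -> node
    minikube -p functional-446865 cp functional-446865:/home/docker/cp-test.txt ./copy.txt  # node -> host
    minikube -p functional-446865 ssh "sudo cat /home/docker/cp-test.txt"                   # verify on the node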

TestFunctional/parallel/FileSync (0.36s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/378528/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo cat /etc/test/nested/copy/378528/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
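Note: this test relies on minikube copying everything under $MINIKUBE_HOME/files into the node's filesystem at start, preserving relative paths; a sketch of how such a file gets there, assuming the default ~/.minikube layout:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/378528
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/378528/hosts
    minikube start -p functional-446865
    minikube -p functional-446865 ssh "cat /etc/test/nested/copy/378528/hosts"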

TestFunctional/parallel/CertSync (2.58s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/378528.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo cat /etc/ssl/certs/378528.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/378528.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo cat /usr/share/ca-certificates/378528.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3785282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo cat /etc/ssl/certs/3785282.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3785282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo cat /usr/share/ca-certificates/3785282.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.58s)

TestFunctional/parallel/NodeLabels (0.1s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-446865 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
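Note: the go-template iterates the first node's label map and prints only the keys; the doubled quoting in the log is the harness's argv rendering. Run directly, the command is:

    kubectl --context functional-446865 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'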

TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446865 ssh "sudo systemctl is-active docker": exit status 1 (366.732785ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446865 ssh "sudo systemctl is-active containerd": exit status 1 (354.480512ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
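Note: `systemctl is-active` exits 0 only for an active unit, so the exit-status-3 results above are the expected outcome on a crio node; for contrast:

    minikube -p functional-446865 ssh "sudo systemctl is-active docker"       # inactive; ssh exits 3
    minikube -p functional-446865 ssh "sudo systemctl is-active containerd"   # inactive; ssh exits 3
    minikube -p functional-446865 ssh "sudo systemctl is-active crio"         # the active runtime; exits 0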

TestFunctional/parallel/License (0.34s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.03s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-446865 version -o=json --components: (1.032336564s)
--- PASS: TestFunctional/parallel/Version/components (1.03s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-446865 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-446865
localhost/kicbase/echo-server:functional-446865
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446865 image ls --format short --alsologtostderr:
I1210 07:35:41.157327  406267 out.go:360] Setting OutFile to fd 1 ...
I1210 07:35:41.157751  406267 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:41.157795  406267 out.go:374] Setting ErrFile to fd 2...
I1210 07:35:41.157818  406267 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:41.158281  406267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 07:35:41.159063  406267 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:41.159260  406267 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:41.159853  406267 cli_runner.go:164] Run: docker container inspect functional-446865 --format={{.State.Status}}
I1210 07:35:41.184754  406267 ssh_runner.go:195] Run: systemctl --version
I1210 07:35:41.184807  406267 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446865
I1210 07:35:41.212335  406267 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-446865/id_rsa Username:docker}
I1210 07:35:41.318506  406267 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
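Note: the same image inventory backs all four listing tests that follow; only the serialization differs:

    minikube -p functional-446865 image ls --format short   # one reference per line (above)
    minikube -p functional-446865 image ls --format table   # boxed table (next test)
    minikube -p functional-446865 image ls --format json
    minikube -p functional-446865 image ls --format yaml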

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-446865 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-446865  │ f423adf0d860e │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ localhost/minikube-local-cache-test     │ functional-446865  │ a8d944501884b │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-446865  │ ce2d2cda2d858 │ 4.79MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446865 image ls --format table --alsologtostderr:
I1210 07:35:45.749023  406739 out.go:360] Setting OutFile to fd 1 ...
I1210 07:35:45.749223  406739 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:45.749245  406739 out.go:374] Setting ErrFile to fd 2...
I1210 07:35:45.749262  406739 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:45.749554  406739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 07:35:45.750635  406739 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:45.750801  406739 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:45.752120  406739 cli_runner.go:164] Run: docker container inspect functional-446865 --format={{.State.Status}}
I1210 07:35:45.779763  406739 ssh_runner.go:195] Run: systemctl --version
I1210 07:35:45.779816  406739 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446865
I1210 07:35:45.798673  406739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-446865/id_rsa Username:docker}
I1210 07:35:45.909630  406739 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-446865 image ls --format json --alsologtostderr:
[{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["public.ecr.aws/nginx/nginx@sha256:6224130b55f5d4f555846ebdedec6ce07822ebf205b9c1b77c2fd91abab6eb25","public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"54827372"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d
31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.
k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhos
t/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-446865"],"size":"4788229"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"2475623
53"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db
242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"e4ff0537408f606dca28592790b3d71f34716c6d730353e20cfccf3e0590dcc0","repoDigests":["docker.io/library/3d2a21da92c1f4b89905bf731841fc5e70440215f20be25f8561950d455dcc2a-tmp@sha256:18ebcac4b6b36d634e1
c107c8595c572ea07c6631612e9b438ea5df3cd5c0e67"],"repoTags":[],"size":"1638179"},{"id":"a8d944501884b3a5e957ec00ae037b4145c0f592c5c81b4d2101ff9ef3d32a34","repoDigests":["localhost/minikube-local-cache-test@sha256:5adff8933634e0e4d1ae46a3119b3c846f9123b79881e107c3ffde21083d96fb"],"repoTags":["localhost/minikube-local-cache-test:functional-446865"],"size":"3330"},{"id":"f423adf0d860e21a8c17af8c9d621d0e3ea5baca4610b2c2244a4fd3fd3467ae","repoDigests":["localhost/my-image@sha256:f1685c23fb1d96382365b034df979a2368fdfbee51ec5228320b9a50b0552c68"],"repoTags":["localhost/my-image:functional-446865"],"size":"1640790"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"94bff1bec29fd04573941f362e44a6
730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446865 image ls --format json --alsologtostderr:
I1210 07:35:45.454920  406686 out.go:360] Setting OutFile to fd 1 ...
I1210 07:35:45.457003  406686 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:45.457037  406686 out.go:374] Setting ErrFile to fd 2...
I1210 07:35:45.457056  406686 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:45.457373  406686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 07:35:45.458154  406686 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:45.458358  406686 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:45.458917  406686 cli_runner.go:164] Run: docker container inspect functional-446865 --format={{.State.Status}}
I1210 07:35:45.476819  406686 ssh_runner.go:195] Run: systemctl --version
I1210 07:35:45.476900  406686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446865
I1210 07:35:45.493031  406686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-446865/id_rsa Username:docker}
I1210 07:35:45.604783  406686 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
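
The output above is a single JSON array of image records. As a rough illustration (not minikube's own code), a Go sketch that shells out to the same command and decodes that shape could look like the following; the imageRecord type and its field names are inferred from the log output, not taken from the minikube source:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// imageRecord mirrors the four fields visible in the `image ls --format json`
// output above; the type name is illustrative only.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-446865",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		fmt.Printf("%.12s  %d tag(s)  %s bytes\n", img.ID, len(img.RepoTags), img.Size)
	}
}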

TestFunctional/parallel/ImageCommands/ImageListYaml (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-446865 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:6224130b55f5d4f555846ebdedec6ce07822ebf205b9c1b77c2fd91abab6eb25
- public.ecr.aws/nginx/nginx@sha256:b7198452993fe37c15651e967713dd500eb4367f80a2d63c3bb5b172e46fc3b5
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "54827372"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-446865
size: "4788229"
- id: a8d944501884b3a5e957ec00ae037b4145c0f592c5c81b4d2101ff9ef3d32a34
repoDigests:
- localhost/minikube-local-cache-test@sha256:5adff8933634e0e4d1ae46a3119b3c846f9123b79881e107c3ffde21083d96fb
repoTags:
- localhost/minikube-local-cache-test:functional-446865
size: "3330"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446865 image ls --format yaml --alsologtostderr:
I1210 07:35:41.428261  406304 out.go:360] Setting OutFile to fd 1 ...
I1210 07:35:41.428379  406304 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:41.428385  406304 out.go:374] Setting ErrFile to fd 2...
I1210 07:35:41.428390  406304 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:41.428729  406304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 07:35:41.429649  406304 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:41.429772  406304 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:41.433666  406304 cli_runner.go:164] Run: docker container inspect functional-446865 --format={{.State.Status}}
I1210 07:35:41.457065  406304 ssh_runner.go:195] Run: systemctl --version
I1210 07:35:41.457118  406304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446865
I1210 07:35:41.482371  406304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-446865/id_rsa Username:docker}
I1210 07:35:41.583116  406304 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.54s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446865 ssh pgrep buildkitd: exit status 1 (352.554729ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr
2025/12/10 07:35:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr: (3.327548954s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e4ff0537408
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-446865
--> f423adf0d86
Successfully tagged localhost/my-image:functional-446865
f423adf0d860e21a8c17af8c9d621d0e3ea5baca4610b2c2244a4fd3fd3467ae
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-446865 image build -t localhost/my-image:functional-446865 testdata/build --alsologtostderr:
I1210 07:35:42.319204  406437 out.go:360] Setting OutFile to fd 1 ...
I1210 07:35:42.320095  406437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:42.320171  406437 out.go:374] Setting ErrFile to fd 2...
I1210 07:35:42.320198  406437 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 07:35:42.320538  406437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 07:35:42.321283  406437 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:42.322093  406437 config.go:182] Loaded profile config "functional-446865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1210 07:35:42.322702  406437 cli_runner.go:164] Run: docker container inspect functional-446865 --format={{.State.Status}}
I1210 07:35:42.341545  406437 ssh_runner.go:195] Run: systemctl --version
I1210 07:35:42.341606  406437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-446865
I1210 07:35:42.359961  406437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-446865/id_rsa Username:docker}
I1210 07:35:42.457367  406437 build_images.go:162] Building image from path: /tmp/build.632605949.tar
I1210 07:35:42.457444  406437 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 07:35:42.465316  406437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.632605949.tar
I1210 07:35:42.468973  406437 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.632605949.tar: stat -c "%s %y" /var/lib/minikube/build/build.632605949.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.632605949.tar': No such file or directory
I1210 07:35:42.469005  406437 ssh_runner.go:362] scp /tmp/build.632605949.tar --> /var/lib/minikube/build/build.632605949.tar (3072 bytes)
I1210 07:35:42.487914  406437 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.632605949
I1210 07:35:42.496014  406437 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.632605949 -xf /var/lib/minikube/build/build.632605949.tar
I1210 07:35:42.504378  406437 crio.go:315] Building image: /var/lib/minikube/build/build.632605949
I1210 07:35:42.504452  406437 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-446865 /var/lib/minikube/build/build.632605949 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1210 07:35:45.563898  406437 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-446865 /var/lib/minikube/build/build.632605949 --cgroup-manager=cgroupfs: (3.059420936s)
I1210 07:35:45.563964  406437 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.632605949
I1210 07:35:45.572845  406437 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.632605949.tar
I1210 07:35:45.581275  406437 build_images.go:218] Built localhost/my-image:functional-446865 from /tmp/build.632605949.tar
I1210 07:35:45.581304  406437 build_images.go:134] succeeded building to: functional-446865
I1210 07:35:45.581310  406437 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
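
The stderr above shows the build flow: the local context is packed into a tar (/tmp/build.632605949.tar), copied into the node, extracted under /var/lib/minikube/build, and built with `sudo podman build`. A minimal Go sketch of just the packaging step, assuming a flat context directory; this is illustrative only, not minikube's build_images.go:

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarContext packs dir into a tar archive at dest, storing paths relative to
// dir so the node side can extract it with `tar -C <builddir> -xf <archive>`.
func tarContext(dir, dest string) error {
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	// The context path matches the testdata directory used above; the
	// destination name is a made-up stand-in for /tmp/build.632605949.tar.
	if err := tarContext("testdata/build", "/tmp/build.sketch.tar"); err != nil {
		panic(err)
	}
}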

TestFunctional/parallel/ImageCommands/Setup (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-446865
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image load --daemon kicbase/echo-server:functional-446865 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-446865 image load --daemon kicbase/echo-server:functional-446865 --alsologtostderr: (1.456014955s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.73s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image load --daemon kicbase/echo-server:functional-446865 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-446865
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image load --daemon kicbase/echo-server:functional-446865 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "487.632998ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "68.864862ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "443.784132ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "73.447271ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image save kicbase/echo-server:functional-446865 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image rm kicbase/echo-server:functional-446865 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-446865 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-446865 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-446865 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 402745: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-446865 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.10s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-446865 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-446865 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [769eb1ee-91d1-4519-92fc-b0449b636dbe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [769eb1ee-91d1-4519-92fc-b0449b636dbe] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003720425s
I1210 07:35:14.347042  378528 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)
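
The wait above polls for pods matching run=nginx-svc until they report Running (9s here, with a 4m ceiling). A hedged Go sketch of that pattern, shelling out to kubectl: the helper name and the 2s interval are assumptions, and the phase check below only handles the single-pod case shown in this log, not the general helpers_test.go logic:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls `kubectl get pods -l <selector>` until the (single)
// matching pod reports phase Running or the timeout expires.
func waitForPods(kubeContext, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
}

func main() {
	if err := waitForPods("functional-446865", "run=nginx-svc", 4*time.Minute); err != nil {
		panic(err)
	}
}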

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-446865
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 image save --daemon kicbase/echo-server:functional-446865 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-446865
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-446865 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.159.154 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-446865 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-446865 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-446865 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-ddj6c" [9bf13628-a7c1-48a3-a500-b399685d47ff] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-ddj6c" [9bf13628-a7c1-48a3-a500-b399685d47ff] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004589386s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/MountCmd/any-port (8.29s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdany-port3273396256/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765352126956482771" to /tmp/TestFunctionalparallelMountCmdany-port3273396256/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765352126956482771" to /tmp/TestFunctionalparallelMountCmdany-port3273396256/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765352126956482771" to /tmp/TestFunctionalparallelMountCmdany-port3273396256/001/test-1765352126956482771
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.797468ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1210 07:35:27.303691  378528 retry.go:31] will retry after 516.188455ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 07:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 07:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 07:35 test-1765352126956482771
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh cat /mount-9p/test-1765352126956482771
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-446865 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [274179f6-68e1-4035-9f55-bcdd8004d453] Pending
helpers_test.go:353: "busybox-mount" [274179f6-68e1-4035-9f55-bcdd8004d453] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [274179f6-68e1-4035-9f55-bcdd8004d453] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [274179f6-68e1-4035-9f55-bcdd8004d453] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003555878s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-446865 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdany-port3273396256/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.29s)
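
The "will retry after 516.188455ms" line above comes from minikube's retry helper (retry.go), which re-runs the findmnt probe with a randomized delay. A minimal sketch of that pattern under the same assumptions (jittered delay, fixed attempt budget); this is illustrative, not the actual package implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a random delay of at most
// maxDelay after each failure, and returns the last error if all attempts fail.
func retry(attempts int, maxDelay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := time.Duration(rand.Int63n(int64(maxDelay)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	probes := 0
	_ = retry(5, time.Second, func() error {
		probes++
		if probes < 2 {
			return fmt.Errorf("exit status 1") // e.g. findmnt before the mount appears
		}
		return nil
	})
}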

TestFunctional/parallel/ServiceCmd/List (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 service list -o json
functional_test.go:1504: Took "535.942328ms" to run "out/minikube-linux-arm64 -p functional-446865 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31183
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31183
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

TestFunctional/parallel/MountCmd/specific-port (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdspecific-port2634105139/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (470.276654ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1210 07:35:35.719383  378528 retry.go:31] will retry after 333.791046ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdspecific-port2634105139/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-446865 ssh "sudo umount -f /mount-9p": exit status 1 (339.197978ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-446865 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdspecific-port2634105139/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T" /mount1: (1.120436064s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-446865 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-446865 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-446865 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1456434196/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-446865
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-446865
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-446865
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22089-376671/.minikube/files/etc/test/nested/copy/378528/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-314220 cache add registry.k8s.io/pause:3.1: (1.199315201s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-314220 cache add registry.k8s.io/pause:3.3: (1.13339475s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-314220 cache add registry.k8s.io/pause:latest: (1.102539548s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1277999584/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 cache add minikube-local-cache-test:functional-314220
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 cache delete minikube-local-cache-test:functional-314220
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-314220
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.79s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (284.015458ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.79s)
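
The sequence above removes the cached image from the node, confirms `crictl inspecti` now fails, runs `cache reload`, and confirms the image is back. Asserting that kind of expected non-zero exit in Go looks roughly like the sketch below; the profile and image names are taken from the log, and the surrounding structure is illustrative rather than the test's actual code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-314220", "ssh",
		"sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		// Non-zero exit: the image is absent, which is the expected state
		// right after `ssh sudo crictl rmi` and before `cache reload`.
		fmt.Println("image absent as expected; `cache reload` should restore it")
	case err == nil:
		fmt.Println("image still present")
	default:
		fmt.Println("could not run probe:", err)
	}
}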

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs267782939/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-314220 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs267782939/001/logs.txt: (1.002457843s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 config get cpus: exit status 14 (81.061813ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 config get cpus: exit status 14 (70.541991ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-314220 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-314220 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (201.816325ms)

                                                
                                                
-- stdout --
	* [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 08:05:06.948612  437589 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:05:06.948754  437589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:05:06.948766  437589 out.go:374] Setting ErrFile to fd 2...
	I1210 08:05:06.948771  437589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:05:06.949590  437589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:05:06.950067  437589 out.go:368] Setting JSON to false
	I1210 08:05:06.950915  437589 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10057,"bootTime":1765343850,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 08:05:06.951047  437589 start.go:143] virtualization:  
	I1210 08:05:06.954856  437589 out.go:179] * [functional-314220] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1210 08:05:06.958797  437589 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:05:06.958935  437589 notify.go:221] Checking for updates...
	I1210 08:05:06.964452  437589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:05:06.967300  437589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:05:06.970271  437589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 08:05:06.973148  437589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:05:06.976005  437589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:05:06.979673  437589 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 08:05:06.980328  437589 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:05:07.012115  437589 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:05:07.012262  437589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:05:07.078043  437589 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:05:07.067922424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:05:07.078146  437589 docker.go:319] overlay module found
	I1210 08:05:07.081360  437589 out.go:179] * Using the docker driver based on existing profile
	I1210 08:05:07.084297  437589 start.go:309] selected driver: docker
	I1210 08:05:07.084338  437589 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:05:07.084444  437589 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:05:07.088209  437589 out.go:203] 
	W1210 08:05:07.091219  437589 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 08:05:07.094022  437589 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-314220 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.44s)
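
The dry run fails fast on resource validation: 250MiB requested versus a 1800MB usable minimum, surfaced as exit status 23 with reason RSRC_INSUFFICIENT_REQ_MEMORY. A hypothetical restatement of that guard in Go (the constant and message mirror the log text, not minikube's actual source):

package main

import "fmt"

// minUsableMemoryMB mirrors the figure in the error text; it is not
// read from minikube's source.
const minUsableMemoryMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, like `--memory 250MB` above
	fmt.Println(validateMemory(4096)) // <nil>: the profile's 4096MB passes
}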

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-314220 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-314220 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (210.634623ms)

                                                
                                                
-- stdout --
	* [functional-314220] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 08:05:07.394971  437710 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:05:07.395185  437710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:05:07.395218  437710 out.go:374] Setting ErrFile to fd 2...
	I1210 08:05:07.395239  437710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:05:07.395879  437710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:05:07.396304  437710 out.go:368] Setting JSON to false
	I1210 08:05:07.397159  437710 start.go:133] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10058,"bootTime":1765343850,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1210 08:05:07.397255  437710 start.go:143] virtualization:  
	I1210 08:05:07.400545  437710 out.go:179] * [functional-314220] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1210 08:05:07.403482  437710 out.go:179]   - MINIKUBE_LOCATION=22089
	I1210 08:05:07.403567  437710 notify.go:221] Checking for updates...
	I1210 08:05:07.409228  437710 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 08:05:07.412142  437710 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	I1210 08:05:07.415611  437710 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	I1210 08:05:07.418405  437710 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1210 08:05:07.421215  437710 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 08:05:07.424553  437710 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1210 08:05:07.425232  437710 driver.go:422] Setting default libvirt URI to qemu:///system
	I1210 08:05:07.456710  437710 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1210 08:05:07.456836  437710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:05:07.529584  437710 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-10 08:05:07.520295348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:05:07.529696  437710 docker.go:319] overlay module found
	I1210 08:05:07.532635  437710 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1210 08:05:07.535467  437710 start.go:309] selected driver: docker
	I1210 08:05:07.535486  437710 start.go:927] validating driver "docker" against &{Name:functional-314220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765319469-22089@sha256:ee6740d69848e67faff1932b2b17cde529e2507f2de6c38fad140aad19064fca Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-314220 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 08:05:07.535585  437710 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 08:05:07.539094  437710 out.go:203] 
	W1210 08:05:07.541939  437710 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 08:05:07.544735  437710 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.21s)
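
This is the same insufficient-memory failure as DryRun, localized to French ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the French rendering of the exit message). A sketch of reproducing it, on the assumption that minikube selects its translation from the locale environment; LC_ALL here is an assumed knob, not something this log confirms:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-314220",
		"--dry-run", "--memory", "250MB", "--driver=docker",
		"--container-runtime=crio", "--kubernetes-version=v1.35.0-beta.0")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // assumed locale knob
	out, _ := cmd.CombinedOutput() // exit status 23 is expected here
	fmt.Print(string(out))
}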

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh -n functional-314220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 cp functional-314220:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3174646371/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh -n functional-314220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh -n functional-314220 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.09s)
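
The cp test covers three directions, as the commands above show: a host file into the node, a node file back out to the host, and a copy into a node path whose parent directories do not yet exist. Condensed into a Go sketch with the same binary, profile, and paths:

package main

import "os/exec"

func cp(src, dst string) error {
	return exec.Command("out/minikube-linux-arm64", "-p", "functional-314220",
		"cp", src, dst).Run()
}

func main() {
	_ = cp("testdata/cp-test.txt", "/home/docker/cp-test.txt")               // host -> node
	_ = cp("functional-314220:/home/docker/cp-test.txt", "/tmp/cp-test.txt") // node -> host
	_ = cp("testdata/cp-test.txt", "/tmp/does/not/exist/cp-test.txt")        // host -> node path created on the fly
}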

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/378528/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo cat /etc/test/nested/copy/378528/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/378528.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo cat /etc/ssl/certs/378528.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/378528.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo cat /usr/share/ca-certificates/378528.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3785282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo cat /etc/ssl/certs/3785282.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3785282.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo cat /usr/share/ca-certificates/3785282.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.11s)
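
Each certificate is checked in three places; the hashed filenames (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash naming convention for /etc/ssl/certs, and all three locations are expected to hold the same synced content. A minimal equality check in Go, assuming it is run inside the node (paths taken from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Run inside the node (e.g. via `minikube ssh`) for this to be
	// meaningful; these are the first cert's three locations above.
	paths := []string{
		"/etc/ssl/certs/378528.pem",
		"/usr/share/ca-certificates/378528.pem",
		"/etc/ssl/certs/51391683.0",
	}
	var first []byte
	for i, p := range paths {
		b, err := os.ReadFile(p)
		if err != nil {
			fmt.Println("missing:", p, err)
			return
		}
		if i == 0 {
			first = b
		} else if !bytes.Equal(first, b) {
			fmt.Println("content mismatch at", p)
			return
		}
	}
	fmt.Println("all synced copies match")
}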

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 ssh "sudo systemctl is-active docker": exit status 1 (363.29706ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 ssh "sudo systemctl is-active containerd": exit status 1 (322.276113ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.69s)
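
The exit status 3 propagated through ssh is systemctl's own code: `systemctl is-active` exits non-zero for any state other than active, while still printing the state ("inactive") on stdout. A Go sketch that reads the state and deliberately ignores the exit code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState ignores the non-zero exit: `systemctl is-active`
// still prints the state string on stdout even for inactive units.
func runtimeState(unit string) string {
	out, _ := exec.Command("systemctl", "is-active", unit).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, u := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s: %s\n", u, runtimeState(u))
	}
}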

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-314220 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-314220
localhost/kicbase/echo-server:functional-314220
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-314220 image ls --format short --alsologtostderr:
I1210 08:05:10.314491  438366 out.go:360] Setting OutFile to fd 1 ...
I1210 08:05:10.314731  438366 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:10.314924  438366 out.go:374] Setting ErrFile to fd 2...
I1210 08:05:10.315041  438366 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:10.315368  438366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 08:05:10.316062  438366 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:10.316236  438366 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:10.316808  438366 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
I1210 08:05:10.336463  438366 ssh_runner.go:195] Run: systemctl --version
I1210 08:05:10.336513  438366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
I1210 08:05:10.354327  438366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
I1210 08:05:10.449827  438366 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-314220 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ localhost/kicbase/echo-server           │ functional-314220  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/minikube-local-cache-test     │ functional-314220  │ a8d944501884b │ 3.33kB │
│ localhost/my-image                      │ functional-314220  │ eb0ccf50d86ca │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-314220 image ls --format table --alsologtostderr:
I1210 08:05:14.813248  438865 out.go:360] Setting OutFile to fd 1 ...
I1210 08:05:14.813467  438865 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:14.813496  438865 out.go:374] Setting ErrFile to fd 2...
I1210 08:05:14.813515  438865 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:14.813793  438865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 08:05:14.814488  438865 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:14.814678  438865 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:14.815272  438865 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
I1210 08:05:14.838434  438865 ssh_runner.go:195] Run: systemctl --version
I1210 08:05:14.838490  438865 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
I1210 08:05:14.856735  438865 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
I1210 08:05:14.957916  438865 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-314220 image ls --format json --alsologtostderr:
[{"id":"69dfb6213829e68a185eca5531a771f9394cbef531b9a1b20c2b20de6a2c3fed","repoDigests":["docker.io/library/1d7e16bf3c80c9625da26e924cb94374e755348820d9974e07b02b133972a173-tmp@sha256:087ecdcf019fa4dadf51b989c1e811a00381769491b49e36931e8a8b800b8222"],"repoTags":[],"size":"1638178"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiser
ver:v1.35.0-beta.0"],"size":"84949999"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"a8d944501884b3a5e957ec00ae037b4145c0f592c5c81b4d2101ff9ef3d32a34","repoDigests":["localhost/minikube-local-cache-test@sha256:5adff8933634e0e4d1ae46a3119b3c846f9123b79881e107c3ffde21083d96fb"],"repoTags":["localhost/minikube-local-cache-test:functional-314220"],"size":"3330"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"741
06775"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"16378741539f1be9c6e347d127537d
379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc9
1729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-314220"],"size":"4788229"},{"id":"eb0ccf50d86ca1cb667ad6c6c7f3977a027d1997babd6b09614e8cf9c9998eb5","repoDigests":["localhost/my-image@sha256:92fd01979e97385757928896ec90610e00fbcdc4d9a128fd82ddc44ab863de9e"],"repoTags":["localhost/my-image:functional-314220"],"size":"1640790"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/k
ube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-314220 image ls --format json --alsologtostderr:
I1210 08:05:14.593238  438829 out.go:360] Setting OutFile to fd 1 ...
I1210 08:05:14.593450  438829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:14.593480  438829 out.go:374] Setting ErrFile to fd 2...
I1210 08:05:14.593499  438829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:14.593911  438829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 08:05:14.595043  438829 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:14.595308  438829 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:14.596345  438829 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
I1210 08:05:14.614109  438829 ssh_runner.go:195] Run: systemctl --version
I1210 08:05:14.614171  438829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
I1210 08:05:14.630751  438829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
I1210 08:05:14.725661  438829 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)
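
The JSON format is a flat array of image records. A minimal Go decoder for the fields visible above (the struct and field names are mine; the keys and the sample values are copied from the pause:latest entry in the output):

package main

import (
	"encoding/json"
	"fmt"
)

// imageRecord mirrors the keys in `image ls --format json` output above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	raw := []byte(`[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a",
		"repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],
		"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]`)
	var images []imageRecord
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-35s %.12s %s bytes\n", img.RepoTags[0], img.ID, img.Size)
	}
}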

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-314220 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-314220
size: "4788229"
- id: a8d944501884b3a5e957ec00ae037b4145c0f592c5c81b4d2101ff9ef3d32a34
repoDigests:
- localhost/minikube-local-cache-test@sha256:5adff8933634e0e4d1ae46a3119b3c846f9123b79881e107c3ffde21083d96fb
repoTags:
- localhost/minikube-local-cache-test:functional-314220
size: "3330"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-314220 image ls --format yaml --alsologtostderr:
I1210 08:05:10.548553  438409 out.go:360] Setting OutFile to fd 1 ...
I1210 08:05:10.548750  438409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:10.548778  438409 out.go:374] Setting ErrFile to fd 2...
I1210 08:05:10.548800  438409 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:10.549660  438409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 08:05:10.550355  438409 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:10.550478  438409 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:10.550973  438409 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
I1210 08:05:10.568248  438409 ssh_runner.go:195] Run: systemctl --version
I1210 08:05:10.568304  438409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
I1210 08:05:10.585046  438409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
I1210 08:05:10.677480  438409 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 ssh pgrep buildkitd: exit status 1 (265.912034ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image build -t localhost/my-image:functional-314220 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-314220 image build -t localhost/my-image:functional-314220 testdata/build --alsologtostderr: (3.340687508s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-314220 image build -t localhost/my-image:functional-314220 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 69dfb621382
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-314220
--> eb0ccf50d86
Successfully tagged localhost/my-image:functional-314220
eb0ccf50d86ca1cb667ad6c6c7f3977a027d1997babd6b09614e8cf9c9998eb5
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-314220 image build -t localhost/my-image:functional-314220 testdata/build --alsologtostderr:
I1210 08:05:11.027754  438508 out.go:360] Setting OutFile to fd 1 ...
I1210 08:05:11.027937  438508 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:11.027971  438508 out.go:374] Setting ErrFile to fd 2...
I1210 08:05:11.027992  438508 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1210 08:05:11.028262  438508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
I1210 08:05:11.028925  438508 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:11.029607  438508 config.go:182] Loaded profile config "functional-314220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1210 08:05:11.030184  438508 cli_runner.go:164] Run: docker container inspect functional-314220 --format={{.State.Status}}
I1210 08:05:11.047146  438508 ssh_runner.go:195] Run: systemctl --version
I1210 08:05:11.047204  438508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-314220
I1210 08:05:11.067365  438508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/functional-314220/id_rsa Username:docker}
I1210 08:05:11.168677  438508 build_images.go:162] Building image from path: /tmp/build.3382950696.tar
I1210 08:05:11.168759  438508 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 08:05:11.177252  438508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3382950696.tar
I1210 08:05:11.181539  438508 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3382950696.tar: stat -c "%s %y" /var/lib/minikube/build/build.3382950696.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3382950696.tar': No such file or directory
I1210 08:05:11.181575  438508 ssh_runner.go:362] scp /tmp/build.3382950696.tar --> /var/lib/minikube/build/build.3382950696.tar (3072 bytes)
I1210 08:05:11.200640  438508 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3382950696
I1210 08:05:11.208676  438508 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3382950696 -xf /var/lib/minikube/build/build.3382950696.tar
I1210 08:05:11.216965  438508 crio.go:315] Building image: /var/lib/minikube/build/build.3382950696
I1210 08:05:11.217110  438508 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-314220 /var/lib/minikube/build/build.3382950696 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1210 08:05:14.294763  438508 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-314220 /var/lib/minikube/build/build.3382950696 --cgroup-manager=cgroupfs: (3.077611272s)
I1210 08:05:14.294839  438508 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3382950696
I1210 08:05:14.302739  438508 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3382950696.tar
I1210 08:05:14.310137  438508 build_images.go:218] Built localhost/my-image:functional-314220 from /tmp/build.3382950696.tar
I1210 08:05:14.310169  438508 build_images.go:134] succeeded building to: functional-314220
I1210 08:05:14.310175  438508 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.83s)
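
The build log above shows what minikube image build does on a crio node: it tars the build context, copies it to /var/lib/minikube/build, and delegates to sudo podman build --cgroup-manager=cgroupfs. A minimal sketch of the same flow by hand; the Dockerfile content is inferred from the STEP lines above, not copied from the repo:

$ cat testdata/build/Dockerfile     # inferred from STEP 1/3..3/3 above
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
$ out/minikube-linux-arm64 -p functional-314220 image build -t localhost/my-image:functional-314220 testdata/build
$ out/minikube-linux-arm64 -p functional-314220 image ls    # the new tag should be listed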

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-314220
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image load --daemon kicbase/echo-server:functional-314220 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-314220 image load --daemon kicbase/echo-server:functional-314220 --alsologtostderr: (1.278744655s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image load --daemon kicbase/echo-server:functional-314220 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-314220
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image load --daemon kicbase/echo-server:functional-314220 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image save kicbase/echo-server:functional-314220 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image rm kicbase/echo-server:functional-314220 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.94s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.94s)
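
Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile above form a save/remove/restore round trip. A sketch of the same cycle, with a placeholder tarball path standing in for the workspace path used by this run:

$ out/minikube-linux-arm64 -p functional-314220 image save kicbase/echo-server:functional-314220 /tmp/echo-server-save.tar
$ out/minikube-linux-arm64 -p functional-314220 image rm kicbase/echo-server:functional-314220
$ out/minikube-linux-arm64 -p functional-314220 image load /tmp/echo-server-save.tar
$ out/minikube-linux-arm64 -p functional-314220 image ls    # the tag should be back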

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-314220
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 image save --daemon kicbase/echo-server:functional-314220 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-314220
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
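
The tunnel tests daemonize minikube tunnel and later stop the daemon (here the stop reported exit status 103, which this test tolerates). A minimal manual equivalent, managing the background process from the shell:

$ out/minikube-linux-arm64 -p functional-314220 tunnel --alsologtostderr &
$ TUNNEL_PID=$!
# ... LoadBalancer services are reachable while the tunnel runs ...
$ kill $TUNNEL_PID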

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "588.130765ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "61.722555ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "348.464738ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "56.955339ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.41s)
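
The timings above illustrate why profile list has a --light mode: the full listing probes each cluster's status (~348 ms here), while --light skips those probes (~57 ms here). For example:

$ out/minikube-linux-arm64 profile list -o json           # full listing with cluster status
$ out/minikube-linux-arm64 profile list -o json --light   # profile data only, no status probes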

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.83s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1328832652/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.433616ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1210 08:05:03.498249  378528 retry.go:31] will retry after 436.854516ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1328832652/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 ssh "sudo umount -f /mount-9p": exit status 1 (273.529334ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-314220 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1328832652/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
E1210 08:05:04.882734  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.83s)
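
The specific-port test pins the 9p mount to --port 46464, verifies it with findmnt, and tears it down. A sketch of the same sequence with a placeholder host directory in place of the test's temp dir:

$ out/minikube-linux-arm64 mount -p functional-314220 /tmp/mount-src:/mount-9p --port 46464 &
$ out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T /mount-9p | grep 9p"
$ out/minikube-linux-arm64 -p functional-314220 ssh -- ls -la /mount-9p
$ out/minikube-linux-arm64 mount -p functional-314220 --kill=true    # kills mount processes, as VerifyCleanup does below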

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T" /mount1: exit status 1 (632.751015ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1210 08:05:05.610079  378528 retry.go:31] will retry after 388.475603ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-314220 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-314220 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-314220 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2494990345/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.93s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-314220
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-314220
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-314220
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (195.4s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1210 08:07:27.800584  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:01.624581  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:01.630987  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:01.642372  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:01.663715  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:01.705095  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:01.786496  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:01.947969  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:02.269642  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:02.911136  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:04.192953  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:06.755152  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:11.876502  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:22.117779  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:08:42.599352  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:09:23.560923  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:10:04.883146  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m14.497896289s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (195.40s)
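
The start command for this HA topology comes straight from the log: --ha provisions three control-plane nodes behind the 192.168.49.254 load-balancer endpoint seen in the later status checks.

$ out/minikube-linux-arm64 -p ha-769978 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
$ out/minikube-linux-arm64 -p ha-769978 status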

TestMultiControlPlane/serial/DeployApp (7.61s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 kubectl -- rollout status deployment/busybox: (4.729686971s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-926z4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-z4blq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-zwnnn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-926z4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-z4blq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-zwnnn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-926z4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-z4blq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-zwnnn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.61s)

TestMultiControlPlane/serial/PingHostFromPods (1.57s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-926z4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-926z4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-z4blq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-z4blq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-zwnnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 kubectl -- exec busybox-7b57f96db7-zwnnn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.57s)

TestMultiControlPlane/serial/AddWorkerNode (59.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 node add --alsologtostderr -v 5
E1210 08:10:45.484303  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 node add --alsologtostderr -v 5: (58.176249648s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5: (1.087546349s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.26s)
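
node add grows the cluster with a worker (ha-769978-m04 in this run); status should then report the three control planes plus the new worker:

$ out/minikube-linux-arm64 -p ha-769978 node add
$ out/minikube-linux-arm64 -p ha-769978 status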

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-769978 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.006702814s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

TestMultiControlPlane/serial/CopyFile (19.86s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp testdata/cp-test.txt ha-769978:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668116162/001/cp-test_ha-769978.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978:/home/docker/cp-test.txt ha-769978-m02:/home/docker/cp-test_ha-769978_ha-769978-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m02 "sudo cat /home/docker/cp-test_ha-769978_ha-769978-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978:/home/docker/cp-test.txt ha-769978-m03:/home/docker/cp-test_ha-769978_ha-769978-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m03 "sudo cat /home/docker/cp-test_ha-769978_ha-769978-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978:/home/docker/cp-test.txt ha-769978-m04:/home/docker/cp-test_ha-769978_ha-769978-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m04 "sudo cat /home/docker/cp-test_ha-769978_ha-769978-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp testdata/cp-test.txt ha-769978-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668116162/001/cp-test_ha-769978-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m02:/home/docker/cp-test.txt ha-769978:/home/docker/cp-test_ha-769978-m02_ha-769978.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978 "sudo cat /home/docker/cp-test_ha-769978-m02_ha-769978.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m02:/home/docker/cp-test.txt ha-769978-m03:/home/docker/cp-test_ha-769978-m02_ha-769978-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m03 "sudo cat /home/docker/cp-test_ha-769978-m02_ha-769978-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m02:/home/docker/cp-test.txt ha-769978-m04:/home/docker/cp-test_ha-769978-m02_ha-769978-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m04 "sudo cat /home/docker/cp-test_ha-769978-m02_ha-769978-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp testdata/cp-test.txt ha-769978-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668116162/001/cp-test_ha-769978-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m03:/home/docker/cp-test.txt ha-769978:/home/docker/cp-test_ha-769978-m03_ha-769978.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978 "sudo cat /home/docker/cp-test_ha-769978-m03_ha-769978.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m03:/home/docker/cp-test.txt ha-769978-m02:/home/docker/cp-test_ha-769978-m03_ha-769978-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m02 "sudo cat /home/docker/cp-test_ha-769978-m03_ha-769978-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m03:/home/docker/cp-test.txt ha-769978-m04:/home/docker/cp-test_ha-769978-m03_ha-769978-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m04 "sudo cat /home/docker/cp-test_ha-769978-m03_ha-769978-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp testdata/cp-test.txt ha-769978-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile668116162/001/cp-test_ha-769978-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m04:/home/docker/cp-test.txt ha-769978:/home/docker/cp-test_ha-769978-m04_ha-769978.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978 "sudo cat /home/docker/cp-test_ha-769978-m04_ha-769978.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m04:/home/docker/cp-test.txt ha-769978-m02:/home/docker/cp-test_ha-769978-m04_ha-769978-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m02 "sudo cat /home/docker/cp-test_ha-769978-m04_ha-769978-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 cp ha-769978-m04:/home/docker/cp-test.txt ha-769978-m03:/home/docker/cp-test_ha-769978-m04_ha-769978-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m03 "sudo cat /home/docker/cp-test_ha-769978-m04_ha-769978-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.86s)
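
The CopyFile matrix above is built from three minikube cp shapes: host-to-node, node-to-host, and node-to-node, each verified with ssh -n. A condensed sketch (the /tmp destination is a placeholder for the test's temp dir):

$ out/minikube-linux-arm64 -p ha-769978 cp testdata/cp-test.txt ha-769978:/home/docker/cp-test.txt
$ out/minikube-linux-arm64 -p ha-769978 cp ha-769978:/home/docker/cp-test.txt /tmp/cp-test_ha-769978.txt
$ out/minikube-linux-arm64 -p ha-769978 cp ha-769978:/home/docker/cp-test.txt ha-769978-m02:/home/docker/cp-test_ha-769978_ha-769978-m02.txt
$ out/minikube-linux-arm64 -p ha-769978 ssh -n ha-769978-m02 "sudo cat /home/docker/cp-test_ha-769978_ha-769978-m02.txt"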

TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 node stop m02 --alsologtostderr -v 5: (12.056071004s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5: exit status 7 (811.688259ms)
-- stdout --
	ha-769978
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-769978-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-769978-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-769978-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1210 08:12:04.632708  454593 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:12:04.632880  454593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:12:04.632912  454593 out.go:374] Setting ErrFile to fd 2...
	I1210 08:12:04.632933  454593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:12:04.633226  454593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:12:04.633451  454593 out.go:368] Setting JSON to false
	I1210 08:12:04.633514  454593 mustload.go:66] Loading cluster: ha-769978
	I1210 08:12:04.633652  454593 notify.go:221] Checking for updates...
	I1210 08:12:04.634064  454593 config.go:182] Loaded profile config "ha-769978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 08:12:04.634107  454593 status.go:174] checking status of ha-769978 ...
	I1210 08:12:04.634773  454593 cli_runner.go:164] Run: docker container inspect ha-769978 --format={{.State.Status}}
	I1210 08:12:04.663579  454593 status.go:371] ha-769978 host status = "Running" (err=<nil>)
	I1210 08:12:04.663605  454593 host.go:66] Checking if "ha-769978" exists ...
	I1210 08:12:04.663935  454593 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-769978
	I1210 08:12:04.689538  454593 host.go:66] Checking if "ha-769978" exists ...
	I1210 08:12:04.689970  454593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:12:04.690040  454593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-769978
	I1210 08:12:04.712081  454593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/ha-769978/id_rsa Username:docker}
	I1210 08:12:04.815903  454593 ssh_runner.go:195] Run: systemctl --version
	I1210 08:12:04.822867  454593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:12:04.840429  454593 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:12:04.904699  454593 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-10 08:12:04.894414429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:12:04.905252  454593 kubeconfig.go:125] found "ha-769978" server: "https://192.168.49.254:8443"
	I1210 08:12:04.905282  454593 api_server.go:166] Checking apiserver status ...
	I1210 08:12:04.905328  454593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:12:04.918798  454593 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1259/cgroup
	I1210 08:12:04.931074  454593 api_server.go:182] apiserver freezer: "12:freezer:/docker/43be0404ff73484b0ebd6a922cde317fcdab18d337b877350c920db12eb8fb7b/crio/crio-5924966e510aca608323edc18371cd00ecb08f48cb75103489479d12a2260c1a"
	I1210 08:12:04.931177  454593 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/43be0404ff73484b0ebd6a922cde317fcdab18d337b877350c920db12eb8fb7b/crio/crio-5924966e510aca608323edc18371cd00ecb08f48cb75103489479d12a2260c1a/freezer.state
	I1210 08:12:04.941586  454593 api_server.go:204] freezer state: "THAWED"
	I1210 08:12:04.941615  454593 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 08:12:04.949927  454593 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 08:12:04.949959  454593 status.go:463] ha-769978 apiserver status = Running (err=<nil>)
	I1210 08:12:04.949970  454593 status.go:176] ha-769978 status: &{Name:ha-769978 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 08:12:04.949997  454593 status.go:174] checking status of ha-769978-m02 ...
	I1210 08:12:04.950316  454593 cli_runner.go:164] Run: docker container inspect ha-769978-m02 --format={{.State.Status}}
	I1210 08:12:04.969828  454593 status.go:371] ha-769978-m02 host status = "Stopped" (err=<nil>)
	I1210 08:12:04.969858  454593 status.go:384] host is not running, skipping remaining checks
	I1210 08:12:04.969866  454593 status.go:176] ha-769978-m02 status: &{Name:ha-769978-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 08:12:04.969888  454593 status.go:174] checking status of ha-769978-m03 ...
	I1210 08:12:04.970209  454593 cli_runner.go:164] Run: docker container inspect ha-769978-m03 --format={{.State.Status}}
	I1210 08:12:04.987673  454593 status.go:371] ha-769978-m03 host status = "Running" (err=<nil>)
	I1210 08:12:04.987701  454593 host.go:66] Checking if "ha-769978-m03" exists ...
	I1210 08:12:04.988015  454593 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-769978-m03
	I1210 08:12:05.014892  454593 host.go:66] Checking if "ha-769978-m03" exists ...
	I1210 08:12:05.015318  454593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:12:05.015365  454593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-769978-m03
	I1210 08:12:05.037593  454593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/ha-769978-m03/id_rsa Username:docker}
	I1210 08:12:05.150435  454593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:12:05.165435  454593 kubeconfig.go:125] found "ha-769978" server: "https://192.168.49.254:8443"
	I1210 08:12:05.165467  454593 api_server.go:166] Checking apiserver status ...
	I1210 08:12:05.165514  454593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:12:05.179159  454593 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1214/cgroup
	I1210 08:12:05.189675  454593 api_server.go:182] apiserver freezer: "12:freezer:/docker/d9bc5434244cbdc77a256023ccf1397adb6fc7f4a08a47041391d4eda0468fd9/crio/crio-dd30bfb73d6337f8c57e09d84f026120bd1c6b2bdfcfc837fd6edd603defb64f"
	I1210 08:12:05.189748  454593 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d9bc5434244cbdc77a256023ccf1397adb6fc7f4a08a47041391d4eda0468fd9/crio/crio-dd30bfb73d6337f8c57e09d84f026120bd1c6b2bdfcfc837fd6edd603defb64f/freezer.state
	I1210 08:12:05.200044  454593 api_server.go:204] freezer state: "THAWED"
	I1210 08:12:05.200071  454593 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 08:12:05.208242  454593 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 08:12:05.208271  454593 status.go:463] ha-769978-m03 apiserver status = Running (err=<nil>)
	I1210 08:12:05.208281  454593 status.go:176] ha-769978-m03 status: &{Name:ha-769978-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 08:12:05.208297  454593 status.go:174] checking status of ha-769978-m04 ...
	I1210 08:12:05.208605  454593 cli_runner.go:164] Run: docker container inspect ha-769978-m04 --format={{.State.Status}}
	I1210 08:12:05.226636  454593 status.go:371] ha-769978-m04 host status = "Running" (err=<nil>)
	I1210 08:12:05.226667  454593 host.go:66] Checking if "ha-769978-m04" exists ...
	I1210 08:12:05.226982  454593 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-769978-m04
	I1210 08:12:05.245412  454593 host.go:66] Checking if "ha-769978-m04" exists ...
	I1210 08:12:05.245732  454593 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:12:05.245782  454593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-769978-m04
	I1210 08:12:05.282282  454593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/ha-769978-m04/id_rsa Username:docker}
	I1210 08:12:05.376668  454593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:12:05.390398  454593 status.go:176] ha-769978-m04 status: &{Name:ha-769978-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
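
The ** stderr ** trace above shows the shape of minikube's per-node apiserver probe: find the kube-apiserver PID with pgrep, read the container's freezer cgroup to rule out a paused container, then GET /healthz over HTTPS. The last two steps can be reproduced by hand against the HA virtual IP from the log; a sketch (the container IDs are placeholders, not literal values):

	# freezer state of the apiserver's container; "THAWED" means not paused
	sudo cat /sys/fs/cgroup/freezer/docker/<pod-container-id>/crio/crio-<apiserver-container-id>/freezer.state
	# health endpoint; -k because the cluster serves a self-signed certificate
	curl -k https://192.168.49.254:8443/healthz    # prints "ok" when healthy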

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (28.55s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 node start m02 --alsologtostderr -v 5
E1210 08:12:27.800453  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 node start m02 --alsologtostderr -v 5: (27.055497419s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5: (1.371355805s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.55s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.33300156s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.33s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (133.22s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 stop --alsologtostderr -v 5
E1210 08:13:01.625684  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:13:07.948708  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 stop --alsologtostderr -v 5: (37.770812491s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 start --wait true --alsologtostderr -v 5
E1210 08:13:29.326114  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 start --wait true --alsologtostderr -v 5: (1m35.252118658s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (133.22s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.27s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 node delete m03 --alsologtostderr -v 5: (11.261128477s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.27s)
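
The go-template in the final step ranges over .items and over each node's .status.conditions, printing .status only for the condition whose type is "Ready", so a healthy cluster yields one " True" line per node. Run standalone (sample output; one line per node remaining after the delete):

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	#  True
	#  True
	#  True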

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.84s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.03s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 stop --alsologtostderr -v 5
E1210 08:15:04.883077  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 stop --alsologtostderr -v 5: (35.912800277s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5: exit status 7 (116.115713ms)
-- stdout --
	ha-769978
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-769978-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-769978-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1210 08:15:38.376220  466689 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:15:38.376479  466689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:15:38.376509  466689 out.go:374] Setting ErrFile to fd 2...
	I1210 08:15:38.376531  466689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:15:38.376831  466689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:15:38.377074  466689 out.go:368] Setting JSON to false
	I1210 08:15:38.377133  466689 mustload.go:66] Loading cluster: ha-769978
	I1210 08:15:38.377221  466689 notify.go:221] Checking for updates...
	I1210 08:15:38.377635  466689 config.go:182] Loaded profile config "ha-769978": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 08:15:38.377676  466689 status.go:174] checking status of ha-769978 ...
	I1210 08:15:38.378229  466689 cli_runner.go:164] Run: docker container inspect ha-769978 --format={{.State.Status}}
	I1210 08:15:38.397156  466689 status.go:371] ha-769978 host status = "Stopped" (err=<nil>)
	I1210 08:15:38.397178  466689 status.go:384] host is not running, skipping remaining checks
	I1210 08:15:38.397185  466689 status.go:176] ha-769978 status: &{Name:ha-769978 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 08:15:38.397224  466689 status.go:174] checking status of ha-769978-m02 ...
	I1210 08:15:38.397548  466689 cli_runner.go:164] Run: docker container inspect ha-769978-m02 --format={{.State.Status}}
	I1210 08:15:38.417220  466689 status.go:371] ha-769978-m02 host status = "Stopped" (err=<nil>)
	I1210 08:15:38.417245  466689 status.go:384] host is not running, skipping remaining checks
	I1210 08:15:38.417252  466689 status.go:176] ha-769978-m02 status: &{Name:ha-769978-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 08:15:38.417271  466689 status.go:174] checking status of ha-769978-m04 ...
	I1210 08:15:38.417564  466689 cli_runner.go:164] Run: docker container inspect ha-769978-m04 --format={{.State.Status}}
	I1210 08:15:38.440211  466689 status.go:371] ha-769978-m04 host status = "Stopped" (err=<nil>)
	I1210 08:15:38.440233  466689 status.go:384] host is not running, skipping remaining checks
	I1210 08:15:38.440240  466689 status.go:176] ha-769978-m04 status: &{Name:ha-769978-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.03s)
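
Exit status 7 in the status run above is expected here, not a failure: per minikube status --help, the exit code encodes host, cluster, and Kubernetes state on its low bits, so a fully stopped cluster reports 1+2+4 = 7. A quick check (the bit decoding is as documented by the status command; treat it as illustrative):

	out/minikube-linux-arm64 -p ha-769978 status; echo "exit=$?"
	# exit=7  ->  bit 0 (host) + bit 1 (kubelet/cluster) + bit 2 (apiserver) all flagged as not running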

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (161.58s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1210 08:17:27.800461  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:18:01.624732  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m40.624668148s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (161.58s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.83s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 node add --control-plane --alsologtostderr -v 5: (1m20.665951998s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-769978 status --alsologtostderr -v 5: (1.162390203s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.83s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.039108342s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)

                                                
                                    
TestJSONOutput/start/Command (81.4s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-878886 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1210 08:20:04.883131  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-878886 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.398424044s)
--- PASS: TestJSONOutput/start/Command (81.40s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-878886 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-878886 --output=json --user=testUser: (5.830634035s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-864377 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-864377 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.299898ms)
-- stdout --
	{"specversion":"1.0","id":"5b25b365-cc69-4805-a69f-f1c0ee2f455b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-864377] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"549d1236-8c20-4ede-a6b0-afbb6856d886","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22089"}}
	{"specversion":"1.0","id":"24e6b810-87a2-4a94-860a-a6b71b0df3f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c8a3ab3c-5a6f-4c9b-bf3c-0e35ce01d10e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig"}}
	{"specversion":"1.0","id":"e55c65ed-e6df-49b8-8f83-0d7922b358b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube"}}
	{"specversion":"1.0","id":"e22a4440-33d3-4b2e-9e5a-3da84f2d136e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"85eff528-8f07-4161-9659-ab5b40c825d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6ef4bf24-fdf0-4372-8ee6-c740156ed9c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-864377" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-864377
--- PASS: TestErrorJSONOutput (0.25s)
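
Each stdout line above is a CloudEvents 1.0 envelope (specversion, id, source, type) with a minikube-specific data payload; the event types seen in this run are io.k8s.sigs.minikube.step, .info, and .error, and the closing .error event carries the exit code, error name, and any advice. One way to pull just the failure out of the stream, assuming jq is available:

	out/minikube-linux-arm64 start -p json-output-error-864377 --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64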

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.49s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-093253 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-093253 --network=: (36.231233682s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-093253" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-093253
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-093253: (2.233132274s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.49s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-054815 --network=bridge
E1210 08:22:10.877127  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:22:27.800478  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-054815 --network=bridge: (31.840231489s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-054815" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-054815
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-054815: (2.133881497s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.00s)

                                                
                                    
TestKicExistingNetwork (36.11s)

=== RUN   TestKicExistingNetwork
I1210 08:22:41.367024  378528 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 08:22:41.382872  378528 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 08:22:41.382952  378528 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 08:22:41.382969  378528 cli_runner.go:164] Run: docker network inspect existing-network
W1210 08:22:41.399640  378528 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 08:22:41.399669  378528 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1210 08:22:41.399684  378528 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1210 08:22:41.399810  378528 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 08:22:41.417343  378528 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8c4372b9c6ca IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:2b:b4:53:83:a1} reservation:<nil>}
I1210 08:22:41.417653  378528 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c0400}
I1210 08:22:41.417673  378528 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 08:22:41.417724  378528 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 08:22:41.477046  378528 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-068048 --network=existing-network
E1210 08:23:01.627181  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-068048 --network=existing-network: (33.873388667s)
helpers_test.go:176: Cleaning up "existing-network-068048" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-068048
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-068048: (2.098812376s)
I1210 08:23:17.466040  378528 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.11s)
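
The setup trace above is minikube's subnet picker at work: the inspect of the not-yet-existing network fails, the bridge inventory shows 192.168.49.0/24 already held by another profile, and the first free private /24 (192.168.58.0/24) is used for the pre-created existing-network. The create call can be replayed verbatim from the log and then verified:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
	  existing-network
	docker network inspect existing-network --format '{{(index .IPAM.Config 0).Subnet}}'   # 192.168.58.0/24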

                                                
                                    
TestKicCustomSubnet (35.22s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-552824 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-552824 --subnet=192.168.60.0/24: (32.91495838s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-552824 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-552824" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-552824
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-552824: (2.281524334s)
--- PASS: TestKicCustomSubnet (35.22s)

                                                
                                    
TestKicStaticIP (38.03s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-264922 --static-ip=192.168.200.200
E1210 08:24:24.690287  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-264922 --static-ip=192.168.200.200: (35.667671714s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-264922 ip
helpers_test.go:176: Cleaning up "static-ip-264922" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-264922
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-264922: (2.199024756s)
--- PASS: TestKicStaticIP (38.03s)
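
The ip subcommand is what closes the loop on --static-ip: after a successful start, the reported address should echo the requested one. Expected output for this run:

	out/minikube-linux-arm64 -p static-ip-264922 ip
	# 192.168.200.200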

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (72.46s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-155314 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-155314 --driver=docker  --container-runtime=crio: (32.710338895s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-157948 --driver=docker  --container-runtime=crio
E1210 08:25:04.883403  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-157948 --driver=docker  --container-runtime=crio: (33.863145111s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-155314
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-157948
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-157948" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-157948
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-157948: (2.062664091s)
helpers_test.go:176: Cleaning up "first-155314" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-155314
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-155314: (2.405964163s)
--- PASS: TestMinikubeProfile (72.46s)
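
The profile steps above flip the active profile with "minikube profile <name>" and then re-read profile list -ojson. A compact way to list both registered profiles, assuming jq and the valid/invalid grouping that minikube's JSON listing uses:

	out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'
	# first-155314
	# second-157948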

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.52s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-226312 --memory=3072 --mount-string /tmp/TestMountStartserial4113666093/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-226312 --memory=3072 --mount-string /tmp/TestMountStartserial4113666093/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.520596212s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.52s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-226312 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-228158 --memory=3072 --mount-string /tmp/TestMountStartserial4113666093/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-228158 --memory=3072 --mount-string /tmp/TestMountStartserial4113666093/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.308600488s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.31s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-228158 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.92s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-226312 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-226312 --alsologtostderr -v=5: (1.915404597s)
--- PASS: TestMountStart/serial/DeleteFirst (1.92s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-228158 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-228158
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-228158: (1.307338756s)
--- PASS: TestMountStart/serial/Stop (1.31s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.14s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-228158
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-228158: (7.143454054s)
--- PASS: TestMountStart/serial/RestartStopped (8.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-228158 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (135.81s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-894706 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 08:27:27.800495  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:28:01.623785  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-894706 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m15.268781348s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.81s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.78s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-894706 -- rollout status deployment/busybox: (3.03151453s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-lq4p6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-vndl4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-lq4p6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-vndl4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-lq4p6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-vndl4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.78s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-lq4p6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-lq4p6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-vndl4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-vndl4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
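
The awk 'NR==5' | cut -d' ' -f3 pipeline above leans on the fixed layout of busybox nslookup output: a two-line server block, a blank line, then the Name line and its "Address 1: <ip> <name>" answer on line 5, whose third field is the resolved address of host.minikube.internal. That address (192.168.67.1, the docker bridge gateway) is then pinged from each pod. Reproduced for one pod:

	out/minikube-linux-arm64 kubectl -p multinode-894706 -- exec busybox-7b57f96db7-lq4p6 -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# 192.168.67.1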

                                                
                                    
TestMultiNode/serial/AddNode (57.69s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-894706 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-894706 -v=5 --alsologtostderr: (57.007244063s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.69s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-894706 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp testdata/cp-test.txt multinode-894706:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp multinode-894706:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile241734284/001/cp-test_multinode-894706.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp multinode-894706:/home/docker/cp-test.txt multinode-894706-m02:/home/docker/cp-test_multinode-894706_multinode-894706-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m02 "sudo cat /home/docker/cp-test_multinode-894706_multinode-894706-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp multinode-894706:/home/docker/cp-test.txt multinode-894706-m03:/home/docker/cp-test_multinode-894706_multinode-894706-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m03 "sudo cat /home/docker/cp-test_multinode-894706_multinode-894706-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp testdata/cp-test.txt multinode-894706-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp multinode-894706-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile241734284/001/cp-test_multinode-894706-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp multinode-894706-m02:/home/docker/cp-test.txt multinode-894706:/home/docker/cp-test_multinode-894706-m02_multinode-894706.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706 "sudo cat /home/docker/cp-test_multinode-894706-m02_multinode-894706.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp multinode-894706-m02:/home/docker/cp-test.txt multinode-894706-m03:/home/docker/cp-test_multinode-894706-m02_multinode-894706-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m03 "sudo cat /home/docker/cp-test_multinode-894706-m02_multinode-894706-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp testdata/cp-test.txt multinode-894706-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp multinode-894706-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile241734284/001/cp-test_multinode-894706-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp multinode-894706-m03:/home/docker/cp-test.txt multinode-894706:/home/docker/cp-test_multinode-894706-m03_multinode-894706.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706 "sudo cat /home/docker/cp-test_multinode-894706-m03_multinode-894706.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 cp multinode-894706-m03:/home/docker/cp-test.txt multinode-894706-m02:/home/docker/cp-test_multinode-894706-m03_multinode-894706-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 ssh -n multinode-894706-m02 "sudo cat /home/docker/cp-test_multinode-894706-m03_multinode-894706-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.33s)

                                                
                                    
TestMultiNode/serial/StopNode (2.5s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-894706 node stop m03: (1.327105902s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-894706 status: exit status 7 (535.478961ms)
-- stdout --
	multinode-894706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-894706-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-894706-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-894706 status --alsologtostderr: exit status 7 (635.10788ms)
-- stdout --
	multinode-894706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-894706-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-894706-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1210 08:29:44.635498  517270 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:29:44.635645  517270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:29:44.635671  517270 out.go:374] Setting ErrFile to fd 2...
	I1210 08:29:44.635689  517270 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:29:44.635965  517270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:29:44.636193  517270 out.go:368] Setting JSON to false
	I1210 08:29:44.636242  517270 mustload.go:66] Loading cluster: multinode-894706
	I1210 08:29:44.636336  517270 notify.go:221] Checking for updates...
	I1210 08:29:44.636746  517270 config.go:182] Loaded profile config "multinode-894706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 08:29:44.636773  517270 status.go:174] checking status of multinode-894706 ...
	I1210 08:29:44.637491  517270 cli_runner.go:164] Run: docker container inspect multinode-894706 --format={{.State.Status}}
	I1210 08:29:44.662262  517270 status.go:371] multinode-894706 host status = "Running" (err=<nil>)
	I1210 08:29:44.662285  517270 host.go:66] Checking if "multinode-894706" exists ...
	I1210 08:29:44.662594  517270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-894706
	I1210 08:29:44.689102  517270 host.go:66] Checking if "multinode-894706" exists ...
	I1210 08:29:44.689620  517270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:29:44.689666  517270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-894706
	I1210 08:29:44.712102  517270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/multinode-894706/id_rsa Username:docker}
	I1210 08:29:44.813148  517270 ssh_runner.go:195] Run: systemctl --version
	I1210 08:29:44.821859  517270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:29:44.839403  517270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 08:29:44.907553  517270 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-10 08:29:44.897384552 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1210 08:29:44.908135  517270 kubeconfig.go:125] found "multinode-894706" server: "https://192.168.67.2:8443"
	I1210 08:29:44.908164  517270 api_server.go:166] Checking apiserver status ...
	I1210 08:29:44.908215  517270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 08:29:44.919590  517270 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1240/cgroup
	I1210 08:29:44.927840  517270 api_server.go:182] apiserver freezer: "12:freezer:/docker/37a89390cb05b007560f4cf48be517c66b2ec736a5a2b34c4b2ebb2d2ff10d1d/crio/crio-2532fbc2a2c657fe8da5867efac2630d5b060e35bdea8376ec523c4b96f04566"
	I1210 08:29:44.927909  517270 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/37a89390cb05b007560f4cf48be517c66b2ec736a5a2b34c4b2ebb2d2ff10d1d/crio/crio-2532fbc2a2c657fe8da5867efac2630d5b060e35bdea8376ec523c4b96f04566/freezer.state
	I1210 08:29:44.935543  517270 api_server.go:204] freezer state: "THAWED"
	I1210 08:29:44.935569  517270 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1210 08:29:44.943806  517270 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1210 08:29:44.943834  517270 status.go:463] multinode-894706 apiserver status = Running (err=<nil>)
	I1210 08:29:44.943845  517270 status.go:176] multinode-894706 status: &{Name:multinode-894706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 08:29:44.943892  517270 status.go:174] checking status of multinode-894706-m02 ...
	I1210 08:29:44.944224  517270 cli_runner.go:164] Run: docker container inspect multinode-894706-m02 --format={{.State.Status}}
	I1210 08:29:44.968661  517270 status.go:371] multinode-894706-m02 host status = "Running" (err=<nil>)
	I1210 08:29:44.968683  517270 host.go:66] Checking if "multinode-894706-m02" exists ...
	I1210 08:29:44.968992  517270 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-894706-m02
	I1210 08:29:44.991051  517270 host.go:66] Checking if "multinode-894706-m02" exists ...
	I1210 08:29:44.991359  517270 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 08:29:44.991400  517270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-894706-m02
	I1210 08:29:45.027805  517270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/22089-376671/.minikube/machines/multinode-894706-m02/id_rsa Username:docker}
	I1210 08:29:45.146188  517270 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 08:29:45.163842  517270 status.go:176] multinode-894706-m02 status: &{Name:multinode-894706-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 08:29:45.163891  517270 status.go:174] checking status of multinode-894706-m03 ...
	I1210 08:29:45.164355  517270 cli_runner.go:164] Run: docker container inspect multinode-894706-m03 --format={{.State.Status}}
	I1210 08:29:45.198404  517270 status.go:371] multinode-894706-m03 host status = "Stopped" (err=<nil>)
	I1210 08:29:45.198433  517270 status.go:384] host is not running, skipping remaining checks
	I1210 08:29:45.198441  517270 status.go:176] multinode-894706-m03 status: &{Name:multinode-894706-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.50s)
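
Note on the exit code above: minikube status deliberately exits 7 when any host in the profile is stopped, so the non-zero exit is the expected outcome here, not a failure. A minimal sketch outside the harness (the profile name "demo" is hypothetical):

  minikube -p demo node stop m03
  minikube -p demo status
  echo "status exit code: $?"   # 7 means at least one host is stopped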

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.12s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 node start m03 -v=5 --alsologtostderr
E1210 08:29:47.950281  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-894706 node start m03 -v=5 --alsologtostderr: (7.342322097s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.12s)
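
The inverse operation is symmetric: minikube node start rejoins the stopped worker, after which plain kubectl can confirm it. A sketch with the same hypothetical profile:

  minikube -p demo node start m03
  minikube -p demo status        # exits 0 again once every host is Running
  kubectl get nodes              # the restarted worker reappears in the list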

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.24s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-894706
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-894706
E1210 08:30:04.883133  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-894706: (25.106284565s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-894706 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-894706 --wait=true -v=5 --alsologtostderr: (53.005397823s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-894706
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.24s)
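
What this test pins down is that a full stop/start cycle preserves the node list stored in the profile. Reproduced outside the harness (profile name hypothetical; node IPs may still differ between runs on some drivers):

  minikube node list -p demo > before.txt
  minikube stop -p demo
  minikube start -p demo --wait=true
  minikube node list -p demo > after.txt
  diff before.txt after.txt      # an empty diff means the restart kept all nodes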

                                                
                                    
TestMultiNode/serial/DeleteNode (5.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-894706 node delete m03: (5.004194164s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)
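
The go-template in the final step is dense; unrolled, it simply prints the Ready condition of every remaining node. An equivalent, easier-to-quote form (same template, simplified outer quoting):

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
  # one line per node; all "True" confirms the surviving nodes are Ready after the delete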

                                                
                                    
TestMultiNode/serial/StopMultiNode (24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-894706 stop: (23.807821074s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-894706 status: exit status 7 (91.031578ms)

                                                
                                                
-- stdout --
	multinode-894706
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-894706-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-894706 status --alsologtostderr: exit status 7 (100.194477ms)

                                                
                                                
-- stdout --
	multinode-894706
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-894706-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 08:31:41.207474  525130 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:31:41.207600  525130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:31:41.207609  525130 out.go:374] Setting ErrFile to fd 2...
	I1210 08:31:41.207615  525130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:31:41.207866  525130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:31:41.208055  525130 out.go:368] Setting JSON to false
	I1210 08:31:41.208098  525130 mustload.go:66] Loading cluster: multinode-894706
	I1210 08:31:41.208170  525130 notify.go:221] Checking for updates...
	I1210 08:31:41.209056  525130 config.go:182] Loaded profile config "multinode-894706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 08:31:41.209085  525130 status.go:174] checking status of multinode-894706 ...
	I1210 08:31:41.209629  525130 cli_runner.go:164] Run: docker container inspect multinode-894706 --format={{.State.Status}}
	I1210 08:31:41.228111  525130 status.go:371] multinode-894706 host status = "Stopped" (err=<nil>)
	I1210 08:31:41.228136  525130 status.go:384] host is not running, skipping remaining checks
	I1210 08:31:41.228149  525130 status.go:176] multinode-894706 status: &{Name:multinode-894706 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 08:31:41.228183  525130 status.go:174] checking status of multinode-894706-m02 ...
	I1210 08:31:41.228485  525130 cli_runner.go:164] Run: docker container inspect multinode-894706-m02 --format={{.State.Status}}
	I1210 08:31:41.256665  525130 status.go:371] multinode-894706-m02 host status = "Stopped" (err=<nil>)
	I1210 08:31:41.256693  525130 status.go:384] host is not running, skipping remaining checks
	I1210 08:31:41.256708  525130 status.go:176] multinode-894706-m02 status: &{Name:multinode-894706-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-894706 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 08:32:27.800602  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-894706 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (49.686318707s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-894706 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.35s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.87s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-894706
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-894706-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-894706-m02 --driver=docker  --container-runtime=crio: exit status 14 (96.785517ms)

                                                
                                                
-- stdout --
	* [multinode-894706-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-894706-m02' is duplicated with machine name 'multinode-894706-m02' in profile 'multinode-894706'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-894706-m03 --driver=docker  --container-runtime=crio
E1210 08:33:01.623644  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-894706-m03 --driver=docker  --container-runtime=crio: (31.287976222s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-894706
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-894706: exit status 80 (359.276611ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-894706 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-894706-m03 already exists in multinode-894706-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-894706-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-894706-m03: (2.067465186s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.87s)
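
Two distinct guards fire in this test: profile creation rejects a name that collides with a machine name inside an existing multi-node profile (exit 14, MK_USAGE), and node add rejects a node whose generated name collides with another profile (exit 80). The first guard in isolation (profile names hypothetical):

  minikube start -p demo --nodes=2   # creates machines demo and demo-m02
  minikube start -p demo-m02         # rejected: duplicates machine name demo-m02; exit code 14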

                                                
                                    
TestPreload (117.52s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-238038 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-238038 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (59.318643483s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-238038 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-238038 image pull gcr.io/k8s-minikube/busybox: (2.023230737s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-238038
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-238038: (5.934711917s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-238038 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-238038 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (47.53174385s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-238038 image list
E1210 08:35:04.882938  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:176: Cleaning up "test-preload-238038" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-238038
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-238038: (2.459911936s)
--- PASS: TestPreload (117.52s)
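
The sequence above is the standard way to exercise preloaded images: build once with --preload=false, pull an extra image, then restart with the preload enabled and confirm the pulled image survived. Outside the harness (profile name hypothetical):

  minikube start -p demo --preload=false --container-runtime=crio
  minikube -p demo image pull gcr.io/k8s-minikube/busybox
  minikube stop -p demo
  minikube start -p demo --preload=true
  minikube -p demo image list   # the manually pulled busybox image should still appear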

                                                
                                    
TestScheduledStopUnix (110.91s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-500883 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-500883 --memory=3072 --driver=docker  --container-runtime=crio: (34.454937987s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500883 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 08:35:41.875651  539204 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:35:41.875898  539204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:35:41.875944  539204 out.go:374] Setting ErrFile to fd 2...
	I1210 08:35:41.875966  539204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:35:41.876457  539204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:35:41.876785  539204 out.go:368] Setting JSON to false
	I1210 08:35:41.876952  539204 mustload.go:66] Loading cluster: scheduled-stop-500883
	I1210 08:35:41.877354  539204 config.go:182] Loaded profile config "scheduled-stop-500883": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 08:35:41.877448  539204 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/config.json ...
	I1210 08:35:41.877659  539204 mustload.go:66] Loading cluster: scheduled-stop-500883
	I1210 08:35:41.877834  539204 config.go:182] Loaded profile config "scheduled-stop-500883": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-500883 -n scheduled-stop-500883
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500883 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 08:35:42.362179  539294 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:35:42.362409  539294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:35:42.362435  539294 out.go:374] Setting ErrFile to fd 2...
	I1210 08:35:42.362458  539294 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:35:42.362747  539294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:35:42.363068  539294 out.go:368] Setting JSON to false
	I1210 08:35:42.363304  539294 daemonize_unix.go:73] killing process 539221 as it is an old scheduled stop
	I1210 08:35:42.363435  539294 mustload.go:66] Loading cluster: scheduled-stop-500883
	I1210 08:35:42.363838  539294 config.go:182] Loaded profile config "scheduled-stop-500883": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 08:35:42.363938  539294 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/config.json ...
	I1210 08:35:42.364140  539294 mustload.go:66] Loading cluster: scheduled-stop-500883
	I1210 08:35:42.364282  539294 config.go:182] Loaded profile config "scheduled-stop-500883": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1210 08:35:42.373143  378528 retry.go:31] will retry after 104.198µs: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.374312  378528 retry.go:31] will retry after 150.859µs: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.375447  378528 retry.go:31] will retry after 206.012µs: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.376583  378528 retry.go:31] will retry after 203.387µs: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.377719  378528 retry.go:31] will retry after 515.756µs: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.378829  378528 retry.go:31] will retry after 416.984µs: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.379947  378528 retry.go:31] will retry after 1.140262ms: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.382045  378528 retry.go:31] will retry after 1.446022ms: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.384222  378528 retry.go:31] will retry after 1.377616ms: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.386341  378528 retry.go:31] will retry after 3.326862ms: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.390538  378528 retry.go:31] will retry after 8.127464ms: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.399763  378528 retry.go:31] will retry after 10.752941ms: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.411564  378528 retry.go:31] will retry after 16.964045ms: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.428914  378528 retry.go:31] will retry after 22.723351ms: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
I1210 08:35:42.452196  378528 retry.go:31] will retry after 27.18929ms: open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500883 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-500883 -n scheduled-stop-500883
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-500883
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-500883 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1210 08:36:08.303194  539652 out.go:360] Setting OutFile to fd 1 ...
	I1210 08:36:08.303303  539652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:36:08.303314  539652 out.go:374] Setting ErrFile to fd 2...
	I1210 08:36:08.303319  539652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1210 08:36:08.303571  539652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22089-376671/.minikube/bin
	I1210 08:36:08.303859  539652 out.go:368] Setting JSON to false
	I1210 08:36:08.303957  539652 mustload.go:66] Loading cluster: scheduled-stop-500883
	I1210 08:36:08.304308  539652 config.go:182] Loaded profile config "scheduled-stop-500883": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1210 08:36:08.304383  539652 profile.go:143] Saving config to /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/scheduled-stop-500883/config.json ...
	I1210 08:36:08.304565  539652 mustload.go:66] Loading cluster: scheduled-stop-500883
	I1210 08:36:08.304677  539652 config.go:182] Loaded profile config "scheduled-stop-500883": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-500883
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-500883: exit status 7 (67.480694ms)

                                                
                                                
-- stdout --
	scheduled-stop-500883
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-500883 -n scheduled-stop-500883
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-500883 -n scheduled-stop-500883: exit status 7 (70.872113ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-500883" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-500883
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-500883: (4.820285018s)
--- PASS: TestScheduledStopUnix (110.91s)
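
The schedule/cancel cycle is directly reusable: --schedule returns immediately and leaves a background stop process, status --format={{.TimeToStop}} shows the countdown, and --cancel-scheduled kills any pending stop. Sketch (profile name hypothetical):

  minikube stop -p demo --schedule 5m
  minikube status -p demo --format='{{.TimeToStop}}'   # time remaining before the stop fires
  minikube stop -p demo --cancel-scheduled             # prints "All existing scheduled stops cancelled"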

                                                
                                    
TestInsufficientStorage (13.35s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-055567 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-055567 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.782016674s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b406756f-0427-4a8e-b2b8-e25cb8debde9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-055567] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a05bbd0b-01a5-45dd-8640-1e7fba9cab63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22089"}}
	{"specversion":"1.0","id":"29a4d023-7dd8-4f05-878e-2186a23034fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"32f22686-563a-49bf-9e52-3ba28f2dbe31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig"}}
	{"specversion":"1.0","id":"80f10f67-f619-4297-bba6-0847feb7932a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube"}}
	{"specversion":"1.0","id":"559896aa-dc87-472f-b4e2-98698887bb7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2a1174bf-9add-4d64-b677-1be42c3cd797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d6ea473a-883c-4294-8114-3028b0c2ba8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0f8ac761-7a58-4467-b179-41da6961d2c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3a585554-71d3-4a2d-89ef-c42e6152f159","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7635896f-6929-40d7-bcf3-d5d016063e95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6a99ef34-bf22-4fc3-a1ae-0f3220a95d6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-055567\" primary control-plane node in \"insufficient-storage-055567\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"dcfe1856-5a67-452b-8fd9-d363825840ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765319469-22089 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3831e51a-beab-4aa9-a7a1-26db5987496f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"42678b62-91b7-4f21-94ab-01e5d6f9b0ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-055567 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-055567 --output=json --layout=cluster: exit status 7 (280.554025ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-055567","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-055567","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 08:37:09.319874  541374 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-055567" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-055567 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-055567 --output=json --layout=cluster: exit status 7 (304.665688ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-055567","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-055567","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 08:37:09.624084  541441 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-055567" does not appear in /home/jenkins/minikube-integration/22089-376671/kubeconfig
	E1210 08:37:09.633690  541441 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/insufficient-storage-055567/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-055567" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-055567
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-055567: (1.977575053s)
--- PASS: TestInsufficientStorage (13.35s)
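
With --output=json, every step and error arrives as one CloudEvents-style JSON object per line, so the storage failure can be extracted mechanically. A sketch using jq (the jq filter is an addition here, not part of the test):

  minikube start -p demo --output=json \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # on a full /var this prints the "Docker is out of disk space!" message shown above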

                                                
                                    
TestRunningBinaryUpgrade (299.44s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1495060172 start -p running-upgrade-356149 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1210 08:46:27.951523  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1495060172 start -p running-upgrade-356149 --memory=3072 --vm-driver=docker  --container-runtime=crio: (30.502981181s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-356149 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1210 08:47:27.800303  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:48:01.624689  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:50:04.882962  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-356149 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.973453721s)
helpers_test.go:176: Cleaning up "running-upgrade-356149" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-356149
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-356149: (1.960945519s)
--- PASS: TestRunningBinaryUpgrade (299.44s)
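
The upgrade under test is nothing more than two starts against the same profile with different binaries; the newer binary adopts the running cluster in place rather than recreating it. Schematically (binary paths hypothetical):

  ./minikube-v1.35.0 start -p demo --memory=3072 --container-runtime=crio
  ./minikube-head    start -p demo --memory=3072 --container-runtime=crio   # same profile, upgraded in place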

                                                
                                    
TestMissingContainerUpgrade (96.95s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1951105706 start -p missing-upgrade-317974 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1951105706 start -p missing-upgrade-317974 --memory=3072 --driver=docker  --container-runtime=crio: (54.694663129s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-317974
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-317974
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-317974 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-317974 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.624717062s)
helpers_test.go:176: Cleaning up "missing-upgrade-317974" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-317974
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-317974: (2.012424141s)
--- PASS: TestMissingContainerUpgrade (96.95s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-783391 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-783391 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (121.432251ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-783391] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22089
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22089-376671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22089-376671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
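
The usage error is intentional: --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config trips the same check. Sketch (profile name hypothetical):

  minikube start -p demo --no-kubernetes --kubernetes-version=v1.28.0   # exit 14 (MK_USAGE)
  minikube config unset kubernetes-version    # clear a globally pinned version, as the error suggests
  minikube start -p demo --no-kubernetes      # starts the machine without any Kubernetes components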

                                                
                                    
TestPause/serial/Start (89.3s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-767596 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-767596 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m29.298685481s)
--- PASS: TestPause/serial/Start (89.30s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.36s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-783391 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1210 08:37:27.801247  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-783391 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.851066636s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-783391 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.36s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.11s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-783391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-783391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.474269975s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-783391 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-783391 status -o json: exit status 2 (373.319296ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-783391","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-783391
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-783391: (2.262275245s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.11s)
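
The JSON status is convenient for scripting the "host up, Kubernetes down" state the test asserts. A sketch with jq (the filter is an addition here):

  minikube -p demo status -o json | jq -r '"\(.Host)/\(.Kubelet)"'
  # prints Running/Stopped after a --no-kubernetes restart; note status itself still exits 2 in that state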

                                                
                                    
TestNoKubernetes/serial/Start (8.02s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-783391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1210 08:38:01.624211  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-783391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.023222597s)
--- PASS: TestNoKubernetes/serial/Start (8.02s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22089-376671/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-783391 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-783391 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.459237ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
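
The exit codes here nest: systemctl is-active --quiet exits 3 for an inactive unit, ssh propagates that status, and minikube ssh surfaces it as its own non-zero exit, which is exactly what the assertion relies on. Sketch (profile name hypothetical):

  minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"
  echo $?   # non-zero confirms kubelet is not running on a --no-kubernetes node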

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-783391
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-783391: (1.367001952s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.85s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-783391 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-783391 --driver=docker  --container-runtime=crio: (6.853606147s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.85s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-783391 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-783391 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.636032ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (123.45s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-767596 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1210 08:38:50.878901  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-767596 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m3.426114159s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (123.45s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.9s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.90s)

TestStoppedBinaryUpgrade/Upgrade (300.96s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3438259773 start -p stopped-upgrade-136036 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1210 08:41:04.692568  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3438259773 start -p stopped-upgrade-136036 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.609575401s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3438259773 -p stopped-upgrade-136036 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3438259773 -p stopped-upgrade-136036 stop: (1.242106916s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-136036 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1210 08:42:27.800836  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/addons-054300/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:43:01.623818  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-314220/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1210 08:45:04.883466  378528 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22089-376671/.minikube/profiles/functional-446865/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-136036 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.104753309s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (300.96s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.74s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-136036
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-136036: (1.739528042s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.74s)

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.43
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
152 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
153 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-393659 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-393659" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-393659
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)